00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3923 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3518 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.075 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.077 The recommended git tool is: git 00:00:00.077 using credential 00000000-0000-0000-0000-000000000002 00:00:00.079 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.108 Fetching changes from the remote Git repository 00:00:00.109 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.151 Using shallow fetch with depth 1 00:00:00.151 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.151 > git --version # timeout=10 00:00:00.193 > git --version # 'git version 2.39.2' 00:00:00.193 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.225 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.225 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.447 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.459 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.470 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD) 00:00:05.470 > git config core.sparsecheckout # timeout=10 00:00:05.483 > git 
read-tree -mu HEAD # timeout=10 00:00:05.501 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5 00:00:05.516 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images" 00:00:05.517 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10 00:00:05.625 [Pipeline] Start of Pipeline 00:00:05.638 [Pipeline] library 00:00:05.639 Loading library shm_lib@master 00:00:05.640 Library shm_lib@master is cached. Copying from home. 00:00:05.656 [Pipeline] node 00:00:05.679 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.681 [Pipeline] { 00:00:05.691 [Pipeline] catchError 00:00:05.693 [Pipeline] { 00:00:05.707 [Pipeline] wrap 00:00:05.715 [Pipeline] { 00:00:05.722 [Pipeline] stage 00:00:05.724 [Pipeline] { (Prologue) 00:00:05.741 [Pipeline] echo 00:00:05.742 Node: VM-host-SM38 00:00:05.748 [Pipeline] cleanWs 00:00:05.758 [WS-CLEANUP] Deleting project workspace... 00:00:05.758 [WS-CLEANUP] Deferred wipeout is used... 
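The checkout steps above follow a common CI pattern: shallow-fetch a single branch tip with `--depth=1`, resolve `FETCH_HEAD` to a commit, and force-checkout that exact revision. A minimal local sketch of the same pattern, with a throwaway repository standing in for the review.spdk.io build-pool remote:

```shell
# Sketch of the shallow-fetch-and-pin checkout above. The throwaway local
# repo is a stand-in for the Gerrit build-pool remote; seeding two commits
# lets us see that --depth=1 really fetches only the branch tip.
set -euo pipefail
work=$(mktemp -d)
git init -q -b master "$work/origin"
git -C "$work/origin" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "seed commit 1"
git -C "$work/origin" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "seed commit 2"
git init -q -b master "$work/clone"
cd "$work/clone"
# Mirrors the job's "Using shallow fetch with depth 1" step.
git fetch -q --depth=1 -- "file://$work/origin" refs/heads/master
rev=$(git rev-parse FETCH_HEAD^{commit})
# Pin the workspace to the exact revision, as the job does with -f.
git checkout -q -f "$rev"
```

Pinning to `FETCH_HEAD` (rather than a branch name) is what makes the later `git rev-list --no-walk <sha>` and reproducible-build bookkeeping possible.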
00:00:05.765 [WS-CLEANUP] done 00:00:05.947 [Pipeline] setCustomBuildProperty 00:00:06.058 [Pipeline] httpRequest 00:00:07.660 [Pipeline] echo 00:00:07.661 Sorcerer 10.211.164.101 is alive 00:00:07.671 [Pipeline] retry 00:00:07.672 [Pipeline] { 00:00:07.685 [Pipeline] httpRequest 00:00:07.691 HttpMethod: GET 00:00:07.691 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.692 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:07.711 Response Code: HTTP/1.1 200 OK 00:00:07.712 Success: Status code 200 is in the accepted range: 200,404 00:00:07.712 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.237 [Pipeline] } 00:00:09.257 [Pipeline] // retry 00:00:09.265 [Pipeline] sh 00:00:09.555 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz 00:00:09.574 [Pipeline] httpRequest 00:00:10.191 [Pipeline] echo 00:00:10.193 Sorcerer 10.211.164.101 is alive 00:00:10.203 [Pipeline] retry 00:00:10.205 [Pipeline] { 00:00:10.219 [Pipeline] httpRequest 00:00:10.224 HttpMethod: GET 00:00:10.224 URL: http://10.211.164.101/packages/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:10.225 Sending request to url: http://10.211.164.101/packages/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:00:10.241 Response Code: HTTP/1.1 200 OK 00:00:10.242 Success: Status code 200 is in the accepted range: 200,404 00:00:10.242 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:01:46.905 [Pipeline] } 00:01:46.922 [Pipeline] // retry 00:01:46.929 [Pipeline] sh 00:01:47.216 + tar --no-same-owner -xf spdk_92108e0a2be7a969e8ee761a776a1ea64465759a.tar.gz 00:01:49.799 [Pipeline] sh 00:01:50.082 + git -C spdk log --oneline -n5 00:01:50.082 92108e0a2 fsdev/aio: add support for null IOs 00:01:50.082 dcdab59d3 lib/reduce: 
Check return code of read superblock 00:01:50.082 95d9d27f7 bdev/nvme: controller failover/multipath doc change 00:01:50.082 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create() 00:01:50.082 aa7c3b1e2 bdev/nvme: changed default config to multipath 00:01:50.104 [Pipeline] withCredentials 00:01:50.116 > git --version # timeout=10 00:01:50.128 > git --version # 'git version 2.39.2' 00:01:50.149 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:50.151 [Pipeline] { 00:01:50.159 [Pipeline] retry 00:01:50.161 [Pipeline] { 00:01:50.177 [Pipeline] sh 00:01:50.462 + git ls-remote http://dpdk.org/git/dpdk main 00:01:50.476 [Pipeline] } 00:01:50.494 [Pipeline] // retry 00:01:50.499 [Pipeline] } 00:01:50.515 [Pipeline] // withCredentials 00:01:50.524 [Pipeline] httpRequest 00:01:50.927 [Pipeline] echo 00:01:50.929 Sorcerer 10.211.164.101 is alive 00:01:50.938 [Pipeline] retry 00:01:50.940 [Pipeline] { 00:01:50.954 [Pipeline] httpRequest 00:01:50.960 HttpMethod: GET 00:01:50.961 URL: http://10.211.164.101/packages/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:01:50.961 Sending request to url: http://10.211.164.101/packages/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:01:50.963 Response Code: HTTP/1.1 200 OK 00:01:50.963 Success: Status code 200 is in the accepted range: 200,404 00:01:50.964 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:01:55.163 [Pipeline] } 00:01:55.181 [Pipeline] // retry 00:01:55.188 [Pipeline] sh 00:01:55.474 + tar --no-same-owner -xf dpdk_e7bc451c996b5882c5d8267725f3d88118009c75.tar.gz 00:01:56.875 [Pipeline] sh 00:01:57.161 + git -C dpdk log --oneline -n5 00:01:57.161 e7bc451c99 trace: disable traces at compilation 00:01:57.161 dbdf3d5581 timer: override CPU TSC frequency with OS value 00:01:57.161 7268f21aa0 timer: improve TSC estimation accuracy 00:01:57.161 8df71650e9 drivers: remove more redundant newline in 
Marvell drivers 00:01:57.161 41b09d64e3 eal/x86: fix 32-bit write combining store 00:01:57.180 [Pipeline] writeFile 00:01:57.194 [Pipeline] sh 00:01:57.481 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:57.495 [Pipeline] sh 00:01:57.780 + cat autorun-spdk.conf 00:01:57.780 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.780 SPDK_TEST_NVME=1 00:01:57.780 SPDK_TEST_FTL=1 00:01:57.780 SPDK_TEST_ISAL=1 00:01:57.780 SPDK_RUN_ASAN=1 00:01:57.780 SPDK_RUN_UBSAN=1 00:01:57.780 SPDK_TEST_XNVME=1 00:01:57.780 SPDK_TEST_NVME_FDP=1 00:01:57.780 SPDK_TEST_NATIVE_DPDK=main 00:01:57.780 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:57.780 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.789 RUN_NIGHTLY=1 00:01:57.794 [Pipeline] } 00:01:57.807 [Pipeline] // stage 00:01:57.822 [Pipeline] stage 00:01:57.824 [Pipeline] { (Run VM) 00:01:57.836 [Pipeline] sh 00:01:58.122 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:58.122 + echo 'Start stage prepare_nvme.sh' 00:01:58.122 Start stage prepare_nvme.sh 00:01:58.122 + [[ -n 9 ]] 00:01:58.122 + disk_prefix=ex9 00:01:58.122 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:58.122 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:58.122 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:58.122 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:58.122 ++ SPDK_TEST_NVME=1 00:01:58.122 ++ SPDK_TEST_FTL=1 00:01:58.122 ++ SPDK_TEST_ISAL=1 00:01:58.122 ++ SPDK_RUN_ASAN=1 00:01:58.122 ++ SPDK_RUN_UBSAN=1 00:01:58.122 ++ SPDK_TEST_XNVME=1 00:01:58.122 ++ SPDK_TEST_NVME_FDP=1 00:01:58.122 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:58.122 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:58.122 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:58.122 ++ RUN_NIGHTLY=1 00:01:58.122 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:58.122 + nvme_files=() 00:01:58.122 + declare -A nvme_files 00:01:58.122 + backend_dir=/var/lib/libvirt/images/backends 
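The `autorun-spdk.conf` dumped above is a flat `KEY=value` file; each stage script simply `source`s it and gates optional work with arithmetic tests. The pattern can be sketched as follows (filename and keys trimmed down; this is an illustration, not the real stage script):

```shell
# Sketch of the autorun-spdk.conf pattern: a flat KEY=value file sourced by
# each stage script, with (( ... )) arithmetic guards for optional features.
set -euo pipefail
conf=$(mktemp)
cat > "$conf" <<'EOF'
SPDK_TEST_NVME=1
SPDK_TEST_FTL=1
SPDK_TEST_NVME_FDP=1
RUN_NIGHTLY=1
EOF
source "$conf"
ftl_enabled=no
# Same guard shape as "(( SPDK_TEST_FTL == 1 ))" in prepare_nvme.sh above.
if (( SPDK_TEST_FTL == 1 )); then
  ftl_enabled=yes
fi
echo "FTL: $ftl_enabled"
```

Because the file is plain shell, the `++`-prefixed xtrace lines in the log are exactly the sourced assignments being replayed.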
00:01:58.122 + nvme_files['nvme.img']=5G 00:01:58.122 + nvme_files['nvme-cmb.img']=5G 00:01:58.122 + nvme_files['nvme-multi0.img']=4G 00:01:58.122 + nvme_files['nvme-multi1.img']=4G 00:01:58.122 + nvme_files['nvme-multi2.img']=4G 00:01:58.122 + nvme_files['nvme-openstack.img']=8G 00:01:58.122 + nvme_files['nvme-zns.img']=5G 00:01:58.122 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:58.122 + (( SPDK_TEST_FTL == 1 )) 00:01:58.122 + nvme_files["nvme-ftl.img"]=6G 00:01:58.122 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:58.122 + nvme_files["nvme-fdp.img"]=1G 00:01:58.122 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:58.122 + for nvme in "${!nvme_files[@]}" 00:01:58.122 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G 00:01:58.122 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.122 + for nvme in "${!nvme_files[@]}" 00:01:58.122 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G 00:01:58.400 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:58.400 + for nvme in "${!nvme_files[@]}" 00:01:58.400 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G 00:01:58.400 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:58.400 + for nvme in "${!nvme_files[@]}" 00:01:58.400 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G 00:01:58.661 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:58.661 + for nvme in "${!nvme_files[@]}" 00:01:58.661 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G 00:01:58.661 Formatting 
'/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:58.661 + for nvme in "${!nvme_files[@]}" 00:01:58.661 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G 00:01:58.922 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:58.922 + for nvme in "${!nvme_files[@]}" 00:01:58.922 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G 00:01:59.184 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:59.184 + for nvme in "${!nvme_files[@]}" 00:01:59.184 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G 00:01:59.184 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:59.184 + for nvme in "${!nvme_files[@]}" 00:01:59.184 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G 00:01:59.445 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:59.445 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu 00:01:59.445 + echo 'End stage prepare_nvme.sh' 00:01:59.445 End stage prepare_nvme.sh 00:01:59.458 [Pipeline] sh 00:01:59.743 + DISTRO=fedora39 00:01:59.743 + CPUS=10 00:01:59.743 + RAM=12288 00:01:59.743 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:59.743 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b 
/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:59.743 00:01:59.743 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:59.743 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:59.743 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:59.743 HELP=0 00:01:59.743 DRY_RUN=0 00:01:59.743 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img, 00:01:59.743 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:59.743 NVME_AUTO_CREATE=0 00:01:59.743 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,, 00:01:59.743 NVME_CMB=,,,, 00:01:59.743 NVME_PMR=,,,, 00:01:59.743 NVME_ZNS=,,,, 00:01:59.743 NVME_MS=true,,,, 00:01:59.743 NVME_FDP=,,,on, 00:01:59.743 SPDK_VAGRANT_DISTRO=fedora39 00:01:59.743 SPDK_VAGRANT_VMCPU=10 00:01:59.743 SPDK_VAGRANT_VMRAM=12288 00:01:59.743 SPDK_VAGRANT_PROVIDER=libvirt 00:01:59.743 SPDK_VAGRANT_HTTP_PROXY= 00:01:59.743 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:59.743 SPDK_OPENSTACK_NETWORK=0 00:01:59.743 VAGRANT_PACKAGE_BOX=0 00:01:59.743 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:59.743 FORCE_DISTRO=true 00:01:59.743 VAGRANT_BOX_VERSION= 00:01:59.743 EXTRA_VAGRANTFILES= 00:01:59.743 NIC_MODEL=e1000 00:01:59.743 00:01:59.743 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:59.744 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:02.293 Bringing machine 'default' up with 'libvirt' provider... 
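The image creation in `prepare_nvme.sh` above is driven by an associative array mapping image name to size. A local sketch, with `truncate` standing in for `spdk/scripts/vagrant/create_nvme_img.sh` (the real helper also registers the file with libvirt):

```shell
# Sketch of the nvme_files table from prepare_nvme.sh: image name -> size,
# looped to create sparse raw backing files. truncate is only a stand-in
# for the create_nvme_img.sh helper used in the log.
set -euo pipefail
backend_dir=$(mktemp -d)
disk_prefix=ex9
declare -A nvme_files
nvme_files['nvme.img']=5G
nvme_files['nvme-ftl.img']=6G
nvme_files['nvme-fdp.img']=1G
for nvme in "${!nvme_files[@]}"; do
  truncate -s "${nvme_files[$nvme]}" "$backend_dir/$disk_prefix-$nvme"
done
# "6G" here means GiB, matching the "size=6442450944" the log reports
# for the ftl image.
stat -c '%n %s' "$backend_dir/$disk_prefix-nvme-ftl.img"
```

Keying the loop on `"${!nvme_files[@]}"` is why the log formats the images in hash order rather than the order they were declared.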
00:02:02.555 ==> default: Creating image (snapshot of base box volume). 00:02:02.555 ==> default: Creating domain with the following settings... 00:02:02.555 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728411171_caa8bfb5b80a11db67a4 00:02:02.555 ==> default: -- Domain type: kvm 00:02:02.555 ==> default: -- Cpus: 10 00:02:02.555 ==> default: -- Feature: acpi 00:02:02.555 ==> default: -- Feature: apic 00:02:02.555 ==> default: -- Feature: pae 00:02:02.555 ==> default: -- Memory: 12288M 00:02:02.555 ==> default: -- Memory Backing: hugepages: 00:02:02.555 ==> default: -- Management MAC: 00:02:02.555 ==> default: -- Loader: 00:02:02.555 ==> default: -- Nvram: 00:02:02.555 ==> default: -- Base box: spdk/fedora39 00:02:02.555 ==> default: -- Storage pool: default 00:02:02.555 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728411171_caa8bfb5b80a11db67a4.img (20G) 00:02:02.555 ==> default: -- Volume Cache: default 00:02:02.555 ==> default: -- Kernel: 00:02:02.555 ==> default: -- Initrd: 00:02:02.555 ==> default: -- Graphics Type: vnc 00:02:02.555 ==> default: -- Graphics Port: -1 00:02:02.555 ==> default: -- Graphics IP: 127.0.0.1 00:02:02.555 ==> default: -- Graphics Password: Not defined 00:02:02.555 ==> default: -- Video Type: cirrus 00:02:02.555 ==> default: -- Video VRAM: 9216 00:02:02.555 ==> default: -- Sound Type: 00:02:02.555 ==> default: -- Keymap: en-us 00:02:02.555 ==> default: -- TPM Path: 00:02:02.555 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:02.555 ==> default: -- Command line args: 00:02:02.555 ==> default: -> value=-device, 00:02:02.555 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:02.555 ==> default: -> value=-drive, 00:02:02.555 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:02.555 ==> default: -> value=-device, 00:02:02.816 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:02.816 ==> default: -> value=-device, 00:02:02.816 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:02.816 ==> default: -> value=-drive, 00:02:02.816 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0, 00:02:02.816 ==> default: -> value=-device, 00:02:02.816 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.816 ==> default: -> value=-device, 00:02:02.816 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:02.816 ==> default: -> value=-drive, 00:02:02.816 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:02.816 ==> default: -> value=-device, 00:02:02.816 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.816 ==> default: -> value=-drive, 00:02:02.816 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:02.816 ==> default: -> value=-device, 00:02:02.816 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.816 ==> default: -> value=-drive, 00:02:02.816 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:02.816 ==> default: -> value=-device, 00:02:02.816 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.816 ==> default: -> value=-device, 00:02:02.816 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:02.816 ==> default: -> value=-device, 00:02:02.816 ==> 
default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:02.816 ==> default: -> value=-drive, 00:02:02.816 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:02.816 ==> default: -> value=-device, 00:02:02.816 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:02.816 ==> default: Creating shared folders metadata... 00:02:02.816 ==> default: Starting domain. 00:02:04.733 ==> default: Waiting for domain to get an IP address... 00:02:22.896 ==> default: Waiting for SSH to become available... 00:02:22.896 ==> default: Configuring and enabling network interfaces... 00:02:25.446 default: SSH address: 192.168.121.234:22 00:02:25.446 default: SSH username: vagrant 00:02:25.446 default: SSH auth method: private key 00:02:28.006 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:36.155 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:42.741 ==> default: Mounting SSHFS shared folder... 00:02:44.681 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:44.681 ==> default: Checking Mount.. 00:02:45.624 ==> default: Folder Successfully Mounted! 00:02:45.624 00:02:45.624 SUCCESS! 00:02:45.624 00:02:45.624 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:45.624 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:45.624 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
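Each emulated NVMe disk in the "Command line args" dump above is three QEMU arguments: a `-device nvme` controller, an `if=none` `-drive` for the raw backing image, and a `-device nvme-ns` namespace bound to that drive (controllers like nvme-2 repeat the drive/ns pair once per namespace). A sketch that only assembles one such triple — the image path and serial are stand-ins, and no guest is launched:

```shell
# Assemble the -device/-drive/-device triple QEMU needs for one NVMe
# controller with a single 4K namespace, mirroring the argument dump above.
# Path and serial are stand-ins; this only builds and prints the arguments.
img=/tmp/ex9-nvme.img
args=(
  -device "nvme,id=nvme-1,serial=12341,addr=0x11"
  -drive  "format=raw,file=$img,if=none,id=nvme-1-drive0"
  -device "nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096"
)
printf '%s\n' "${args[@]}"
```

The `bus=`/`drive=` ids are what tie each `nvme-ns` back to its controller and backing file, which is why every image gets a unique `id=nvme-N-driveM` in the log.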
00:02:45.624 00:02:45.635 [Pipeline] } 00:02:45.650 [Pipeline] // stage 00:02:45.659 [Pipeline] dir 00:02:45.659 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:45.661 [Pipeline] { 00:02:45.673 [Pipeline] catchError 00:02:45.674 [Pipeline] { 00:02:45.686 [Pipeline] sh 00:02:45.970 + vagrant ssh-config --host vagrant 00:02:45.970 + sed -ne '/^Host/,$p' 00:02:45.970 + tee ssh_conf 00:02:48.517 Host vagrant 00:02:48.517 HostName 192.168.121.234 00:02:48.517 User vagrant 00:02:48.517 Port 22 00:02:48.517 UserKnownHostsFile /dev/null 00:02:48.517 StrictHostKeyChecking no 00:02:48.517 PasswordAuthentication no 00:02:48.517 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:48.517 IdentitiesOnly yes 00:02:48.517 LogLevel FATAL 00:02:48.517 ForwardAgent yes 00:02:48.517 ForwardX11 yes 00:02:48.517 00:02:48.532 [Pipeline] withEnv 00:02:48.535 [Pipeline] { 00:02:48.548 [Pipeline] sh 00:02:48.863 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:48.863 source /etc/os-release 00:02:48.863 [[ -e /image.version ]] && img=$(< /image.version) 00:02:48.863 # Minimal, systemd-like check. 00:02:48.863 if [[ -e /.dockerenv ]]; then 00:02:48.863 # Clear garbage from the node'\''s name: 00:02:48.863 # agt-er_autotest_547-896 -> autotest_547-896 00:02:48.863 # $HOSTNAME is the actual container id 00:02:48.863 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:48.863 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:48.863 # We can assume this is a mount from a host where container is running, 00:02:48.863 # so fetch its hostname to easily identify the target swarm worker. 
00:02:48.863 container="$(< /etc/hostname) ($agent)" 00:02:48.863 else 00:02:48.863 # Fallback 00:02:48.863 container=$agent 00:02:48.863 fi 00:02:48.863 fi 00:02:48.863 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:48.863 ' 00:02:48.877 [Pipeline] } 00:02:48.893 [Pipeline] // withEnv 00:02:48.901 [Pipeline] setCustomBuildProperty 00:02:48.917 [Pipeline] stage 00:02:48.919 [Pipeline] { (Tests) 00:02:48.936 [Pipeline] sh 00:02:49.222 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:49.499 [Pipeline] sh 00:02:49.783 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:50.059 [Pipeline] timeout 00:02:50.060 Timeout set to expire in 50 min 00:02:50.061 [Pipeline] { 00:02:50.075 [Pipeline] sh 00:02:50.361 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:50.934 HEAD is now at 92108e0a2 fsdev/aio: add support for null IOs 00:02:50.949 [Pipeline] sh 00:02:51.233 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:51.508 [Pipeline] sh 00:02:51.792 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:52.071 [Pipeline] sh 00:02:52.355 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:52.617 ++ readlink -f spdk_repo 00:02:52.617 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:52.617 + [[ -n /home/vagrant/spdk_repo ]] 00:02:52.617 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:52.617 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:52.617 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:52.617 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:52.617 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:52.617 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:52.617 + cd /home/vagrant/spdk_repo 00:02:52.617 + source /etc/os-release 00:02:52.617 ++ NAME='Fedora Linux' 00:02:52.617 ++ VERSION='39 (Cloud Edition)' 00:02:52.617 ++ ID=fedora 00:02:52.617 ++ VERSION_ID=39 00:02:52.617 ++ VERSION_CODENAME= 00:02:52.617 ++ PLATFORM_ID=platform:f39 00:02:52.617 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:52.617 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:52.617 ++ LOGO=fedora-logo-icon 00:02:52.617 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:52.617 ++ HOME_URL=https://fedoraproject.org/ 00:02:52.617 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:52.617 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:52.617 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:52.617 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:52.617 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:52.617 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:52.617 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:52.617 ++ SUPPORT_END=2024-11-12 00:02:52.617 ++ VARIANT='Cloud Edition' 00:02:52.617 ++ VARIANT_ID=cloud 00:02:52.617 + uname -a 00:02:52.617 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:52.617 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:52.878 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:53.140 Hugepages 00:02:53.140 node hugesize free / total 00:02:53.140 node0 1048576kB 0 / 0 00:02:53.401 node0 2048kB 0 / 0 00:02:53.401 00:02:53.401 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:53.401 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:53.401 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:53.401 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 00:02:53.401 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:53.401 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:53.401 + rm -f /tmp/spdk-ld-path 00:02:53.401 + source autorun-spdk.conf 00:02:53.401 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:53.401 ++ SPDK_TEST_NVME=1 00:02:53.401 ++ SPDK_TEST_FTL=1 00:02:53.401 ++ SPDK_TEST_ISAL=1 00:02:53.401 ++ SPDK_RUN_ASAN=1 00:02:53.401 ++ SPDK_RUN_UBSAN=1 00:02:53.401 ++ SPDK_TEST_XNVME=1 00:02:53.401 ++ SPDK_TEST_NVME_FDP=1 00:02:53.401 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:53.401 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:53.401 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:53.401 ++ RUN_NIGHTLY=1 00:02:53.401 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:53.401 + [[ -n '' ]] 00:02:53.401 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:53.401 + for M in /var/spdk/build-*-manifest.txt 00:02:53.401 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:53.401 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.401 + for M in /var/spdk/build-*-manifest.txt 00:02:53.401 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:53.401 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.401 + for M in /var/spdk/build-*-manifest.txt 00:02:53.401 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:53.401 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.401 ++ uname 00:02:53.401 + [[ Linux == \L\i\n\u\x ]] 00:02:53.401 + sudo dmesg -T 00:02:53.401 + sudo dmesg --clear 00:02:53.401 + dmesg_pid=5771 00:02:53.401 + [[ Fedora Linux == FreeBSD ]] 00:02:53.401 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:53.401 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:53.401 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:53.401 + [[ -x /usr/src/fio-static/fio ]] 00:02:53.401 + sudo dmesg -Tw 00:02:53.401 + export 
FIO_BIN=/usr/src/fio-static/fio 00:02:53.401 + FIO_BIN=/usr/src/fio-static/fio 00:02:53.401 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:53.401 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:53.401 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:53.662 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:53.662 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:53.662 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:53.662 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:53.662 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:53.662 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:53.662 Test configuration: 00:02:53.662 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:53.662 SPDK_TEST_NVME=1 00:02:53.662 SPDK_TEST_FTL=1 00:02:53.662 SPDK_TEST_ISAL=1 00:02:53.662 SPDK_RUN_ASAN=1 00:02:53.662 SPDK_RUN_UBSAN=1 00:02:53.662 SPDK_TEST_XNVME=1 00:02:53.662 SPDK_TEST_NVME_FDP=1 00:02:53.662 SPDK_TEST_NATIVE_DPDK=main 00:02:53.662 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:53.662 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:53.662 RUN_NIGHTLY=1 18:13:42 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:53.662 18:13:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:53.662 18:13:42 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:53.662 18:13:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:53.662 18:13:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:53.662 18:13:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:53.662 18:13:42 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.662 18:13:42 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.662 18:13:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.662 18:13:42 -- paths/export.sh@5 -- $ export PATH 00:02:53.662 18:13:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.662 18:13:42 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:53.662 18:13:42 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:53.662 18:13:42 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728411222.XXXXXX 00:02:53.662 18:13:42 -- 
common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728411222.SuD24k 00:02:53.662 18:13:42 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:53.662 18:13:42 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:02:53.662 18:13:42 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:53.662 18:13:42 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:53.662 18:13:42 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:53.662 18:13:42 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:53.662 18:13:42 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:53.662 18:13:42 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:53.662 18:13:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.662 18:13:42 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:53.662 18:13:42 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:53.662 18:13:42 -- pm/common@17 -- $ local monitor 00:02:53.662 18:13:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.663 18:13:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.663 18:13:42 -- pm/common@25 -- $ sleep 1 00:02:53.663 18:13:42 -- pm/common@21 -- $ date +%s 00:02:53.663 18:13:42 -- pm/common@21 -- $ date +%s 00:02:53.663 18:13:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728411222 00:02:53.663 18:13:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728411222 00:02:53.663 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728411222_collect-vmstat.pm.log 00:02:53.663 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728411222_collect-cpu-load.pm.log 00:02:54.604 18:13:43 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:54.604 18:13:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:54.604 18:13:43 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:54.604 18:13:43 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:54.604 18:13:43 -- spdk/autobuild.sh@16 -- $ date -u 00:02:54.604 Tue Oct 8 06:13:43 PM UTC 2024 00:02:54.604 18:13:43 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:54.604 v25.01-pre-41-g92108e0a2 00:02:54.604 18:13:43 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:54.604 18:13:43 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:54.604 18:13:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:54.604 18:13:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:54.604 18:13:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.604 ************************************ 00:02:54.604 START TEST asan 00:02:54.604 ************************************ 00:02:54.604 using asan 00:02:54.604 18:13:43 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:02:54.604 00:02:54.604 real 0m0.000s 00:02:54.604 user 0m0.000s 00:02:54.604 sys 0m0.000s 00:02:54.604 18:13:43 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:54.604 ************************************ 00:02:54.604 END TEST asan 00:02:54.604 ************************************ 00:02:54.604 18:13:43 asan -- 
common/autotest_common.sh@10 -- $ set +x 00:02:54.864 18:13:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:54.864 18:13:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:54.864 18:13:43 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:54.864 18:13:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:54.864 18:13:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.864 ************************************ 00:02:54.864 START TEST ubsan 00:02:54.864 ************************************ 00:02:54.864 using ubsan 00:02:54.864 18:13:43 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:54.864 00:02:54.864 real 0m0.000s 00:02:54.864 user 0m0.000s 00:02:54.864 sys 0m0.000s 00:02:54.864 18:13:43 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:54.864 ************************************ 00:02:54.864 END TEST ubsan 00:02:54.864 ************************************ 00:02:54.864 18:13:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:54.864 18:13:43 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:54.864 18:13:43 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:54.864 18:13:43 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:54.864 18:13:43 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:54.864 18:13:43 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:54.864 18:13:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.864 ************************************ 00:02:54.864 START TEST build_native_dpdk 00:02:54.864 ************************************ 00:02:54.864 18:13:43 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:54.864 18:13:43 build_native_dpdk -- 
common/autobuild_common.sh@50 -- $ local compiler_version 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:54.864 e7bc451c99 trace: disable traces at compilation 00:02:54.864 dbdf3d5581 timer: override CPU TSC frequency with OS value 00:02:54.864 7268f21aa0 timer: improve TSC estimation accuracy 00:02:54.864 8df71650e9 drivers: remove more redundant newline in Marvell drivers 00:02:54.864 41b09d64e3 eal/x86: fix 32-bit write combining store 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc0 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:54.864 
18:13:43 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc0 21.11.0 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 21.11.0 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:54.864 patching file config/rte_config.h 00:02:54.864 Hunk #1 succeeded at 71 (offset 12 lines). 
00:02:54.864 18:13:43 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc0 24.07.0 00:02:54.864 18:13:43 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 24.07.0 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:54.865 18:13:43 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc0 24.07.0 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc0 '>=' 24.07.0 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:54.865 18:13:43 
build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:54.865 18:13:43 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:54.865 18:13:43 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:54.865 patching file drivers/bus/pci/linux/pci_uio.c 00:02:54.865 18:13:43 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:54.865 18:13:43 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:54.865 18:13:43 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:54.865 18:13:43 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:54.865 18:13:43 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:59.076 The Meson build 
system 00:02:59.076 Version: 1.5.0 00:02:59.076 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:59.076 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:59.076 Build type: native build 00:02:59.076 Program cat found: YES (/usr/bin/cat) 00:02:59.076 Project name: DPDK 00:02:59.076 Project version: 24.11.0-rc0 00:02:59.076 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:59.076 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:59.076 Host machine cpu family: x86_64 00:02:59.076 Host machine cpu: x86_64 00:02:59.076 Message: ## Building in Developer Mode ## 00:02:59.076 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:59.076 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:59.076 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:59.076 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:02:59.076 Program cat found: YES (/usr/bin/cat) 00:02:59.076 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:59.076 Compiler for C supports arguments -march=native: YES 00:02:59.076 Checking for size of "void *" : 8 00:02:59.076 Checking for size of "void *" : 8 (cached) 00:02:59.076 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:59.076 Library m found: YES 00:02:59.076 Library numa found: YES 00:02:59.076 Has header "numaif.h" : YES 00:02:59.076 Library fdt found: NO 00:02:59.076 Library execinfo found: NO 00:02:59.076 Has header "execinfo.h" : YES 00:02:59.076 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:59.076 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:59.076 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:59.076 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:59.076 Run-time dependency openssl found: YES 3.1.1 00:02:59.076 Run-time dependency libpcap found: YES 1.10.4 00:02:59.076 Has header "pcap.h" with dependency libpcap: YES 00:02:59.076 Compiler for C supports arguments -Wcast-qual: YES 00:02:59.076 Compiler for C supports arguments -Wdeprecated: YES 00:02:59.076 Compiler for C supports arguments -Wformat: YES 00:02:59.076 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:59.076 Compiler for C supports arguments -Wformat-security: NO 00:02:59.076 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:59.076 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:59.076 Compiler for C supports arguments -Wnested-externs: YES 00:02:59.076 Compiler for C supports arguments -Wold-style-definition: YES 00:02:59.076 Compiler for C supports arguments -Wpointer-arith: YES 00:02:59.076 Compiler for C supports arguments -Wsign-compare: YES 00:02:59.076 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:59.076 Compiler for C supports arguments -Wundef: YES 00:02:59.076 Compiler for C supports arguments -Wwrite-strings: YES 00:02:59.076 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:59.076 Compiler for C 
supports arguments -Wno-packed-not-aligned: YES 00:02:59.076 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:59.076 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:59.076 Program objdump found: YES (/usr/bin/objdump) 00:02:59.076 Compiler for C supports arguments -mavx512f: YES 00:02:59.076 Checking if "AVX512 checking" compiles: YES 00:02:59.076 Fetching value of define "__SSE4_2__" : 1 00:02:59.076 Fetching value of define "__AES__" : 1 00:02:59.076 Fetching value of define "__AVX__" : 1 00:02:59.076 Fetching value of define "__AVX2__" : 1 00:02:59.077 Fetching value of define "__AVX512BW__" : 1 00:02:59.077 Fetching value of define "__AVX512CD__" : 1 00:02:59.077 Fetching value of define "__AVX512DQ__" : 1 00:02:59.077 Fetching value of define "__AVX512F__" : 1 00:02:59.077 Fetching value of define "__AVX512VL__" : 1 00:02:59.077 Fetching value of define "__PCLMUL__" : 1 00:02:59.077 Fetching value of define "__RDRND__" : 1 00:02:59.077 Fetching value of define "__RDSEED__" : 1 00:02:59.077 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:59.077 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:59.077 Message: lib/log: Defining dependency "log" 00:02:59.077 Message: lib/kvargs: Defining dependency "kvargs" 00:02:59.077 Message: lib/argparse: Defining dependency "argparse" 00:02:59.077 Message: lib/telemetry: Defining dependency "telemetry" 00:02:59.077 Checking for function "getentropy" : NO 00:02:59.077 Message: lib/eal: Defining dependency "eal" 00:02:59.077 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:02:59.077 Message: lib/ring: Defining dependency "ring" 00:02:59.077 Message: lib/rcu: Defining dependency "rcu" 00:02:59.077 Message: lib/mempool: Defining dependency "mempool" 00:02:59.077 Message: lib/mbuf: Defining dependency "mbuf" 00:02:59.077 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:59.077 Fetching value of define "__AVX512F__" : 1 (cached) 
00:02:59.077 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.077 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:59.077 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:59.077 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:59.077 Compiler for C supports arguments -mpclmul: YES 00:02:59.077 Compiler for C supports arguments -maes: YES 00:02:59.077 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:59.077 Compiler for C supports arguments -mavx512bw: YES 00:02:59.077 Compiler for C supports arguments -mavx512dq: YES 00:02:59.077 Compiler for C supports arguments -mavx512vl: YES 00:02:59.077 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:59.077 Compiler for C supports arguments -mavx2: YES 00:02:59.077 Compiler for C supports arguments -mavx: YES 00:02:59.077 Message: lib/net: Defining dependency "net" 00:02:59.077 Message: lib/meter: Defining dependency "meter" 00:02:59.077 Message: lib/ethdev: Defining dependency "ethdev" 00:02:59.077 Message: lib/pci: Defining dependency "pci" 00:02:59.077 Message: lib/cmdline: Defining dependency "cmdline" 00:02:59.077 Message: lib/metrics: Defining dependency "metrics" 00:02:59.077 Message: lib/hash: Defining dependency "hash" 00:02:59.077 Message: lib/timer: Defining dependency "timer" 00:02:59.077 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.077 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:59.077 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:59.077 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.077 Message: lib/acl: Defining dependency "acl" 00:02:59.077 Message: lib/bbdev: Defining dependency "bbdev" 00:02:59.077 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:59.077 Run-time dependency libelf found: YES 0.191 00:02:59.077 Message: lib/bpf: Defining dependency "bpf" 00:02:59.077 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:59.077 Message: lib/compressdev: Defining 
dependency "compressdev" 00:02:59.077 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:59.077 Message: lib/distributor: Defining dependency "distributor" 00:02:59.077 Message: lib/dmadev: Defining dependency "dmadev" 00:02:59.077 Message: lib/efd: Defining dependency "efd" 00:02:59.077 Message: lib/eventdev: Defining dependency "eventdev" 00:02:59.077 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:59.077 Message: lib/gpudev: Defining dependency "gpudev" 00:02:59.077 Message: lib/gro: Defining dependency "gro" 00:02:59.077 Message: lib/gso: Defining dependency "gso" 00:02:59.077 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:59.077 Message: lib/jobstats: Defining dependency "jobstats" 00:02:59.077 Message: lib/latencystats: Defining dependency "latencystats" 00:02:59.077 Message: lib/lpm: Defining dependency "lpm" 00:02:59.077 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.077 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:59.077 Fetching value of define "__AVX512IFMA__" : 1 00:02:59.077 Message: lib/member: Defining dependency "member" 00:02:59.077 Message: lib/pcapng: Defining dependency "pcapng" 00:02:59.077 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:59.077 Message: lib/power: Defining dependency "power" 00:02:59.077 Message: lib/rawdev: Defining dependency "rawdev" 00:02:59.077 Message: lib/regexdev: Defining dependency "regexdev" 00:02:59.077 Message: lib/mldev: Defining dependency "mldev" 00:02:59.077 Message: lib/rib: Defining dependency "rib" 00:02:59.077 Message: lib/reorder: Defining dependency "reorder" 00:02:59.077 Message: lib/sched: Defining dependency "sched" 00:02:59.077 Message: lib/security: Defining dependency "security" 00:02:59.077 Message: lib/stack: Defining dependency "stack" 00:02:59.077 Has header "linux/userfaultfd.h" : YES 00:02:59.077 Has header "linux/vduse.h" : YES 00:02:59.077 Message: lib/vhost: Defining dependency "vhost" 00:02:59.077 Message: 
lib/ipsec: Defining dependency "ipsec" 00:02:59.077 Message: lib/pdcp: Defining dependency "pdcp" 00:02:59.077 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:59.077 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:59.077 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:59.077 Message: lib/fib: Defining dependency "fib" 00:02:59.077 Message: lib/port: Defining dependency "port" 00:02:59.077 Message: lib/pdump: Defining dependency "pdump" 00:02:59.077 Message: lib/table: Defining dependency "table" 00:02:59.077 Message: lib/pipeline: Defining dependency "pipeline" 00:02:59.077 Message: lib/graph: Defining dependency "graph" 00:02:59.077 Message: lib/node: Defining dependency "node" 00:02:59.077 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:59.077 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:59.077 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:59.077 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:00.975 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:00.975 Compiler for C supports arguments -Wno-unused-value: YES 00:03:00.975 Compiler for C supports arguments -Wno-format: YES 00:03:00.975 Compiler for C supports arguments -Wno-format-security: YES 00:03:00.975 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:00.975 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:00.975 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:00.975 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:00.975 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:00.975 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:00.975 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:00.975 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:00.975 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:00.975 Message: drivers/net/i40e: Defining 
dependency "net_i40e" 00:03:00.975 Has header "sys/epoll.h" : YES 00:03:00.975 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:00.975 Configuring doxy-api-html.conf using configuration 00:03:00.975 Configuring doxy-api-man.conf using configuration 00:03:00.975 Program mandb found: YES (/usr/bin/mandb) 00:03:00.975 Program sphinx-build found: NO 00:03:00.975 Configuring rte_build_config.h using configuration 00:03:00.975 Message: 00:03:00.975 ================= 00:03:00.975 Applications Enabled 00:03:00.975 ================= 00:03:00.975 00:03:00.975 apps: 00:03:00.975 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:03:00.975 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:00.975 test-pmd, test-regex, test-sad, test-security-perf, 00:03:00.975 00:03:00.975 Message: 00:03:00.975 ================= 00:03:00.975 Libraries Enabled 00:03:00.975 ================= 00:03:00.975 00:03:00.975 libs: 00:03:00.975 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:03:00.975 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:03:00.975 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:03:00.976 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:03:00.976 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:03:00.976 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:03:00.976 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:03:00.976 graph, node, 00:03:00.976 00:03:00.976 Message: 00:03:00.976 =============== 00:03:00.976 Drivers Enabled 00:03:00.976 =============== 00:03:00.976 00:03:00.976 common: 00:03:00.976 00:03:00.976 bus: 00:03:00.976 pci, vdev, 00:03:00.976 mempool: 00:03:00.976 ring, 00:03:00.976 dma: 00:03:00.976 00:03:00.976 net: 00:03:00.976 i40e, 00:03:00.976 raw: 00:03:00.976 00:03:00.976 crypto: 00:03:00.976 00:03:00.976 compress: 
00:03:00.976 00:03:00.976 regex: 00:03:00.976 00:03:00.976 ml: 00:03:00.976 00:03:00.976 vdpa: 00:03:00.976 00:03:00.976 event: 00:03:00.976 00:03:00.976 baseband: 00:03:00.976 00:03:00.976 gpu: 00:03:00.976 00:03:00.976 00:03:00.976 Message: 00:03:00.976 ================= 00:03:00.976 Content Skipped 00:03:00.976 ================= 00:03:00.976 00:03:00.976 apps: 00:03:00.976 00:03:00.976 libs: 00:03:00.976 00:03:00.976 drivers: 00:03:00.976 common/cpt: not in enabled drivers build config 00:03:00.976 common/dpaax: not in enabled drivers build config 00:03:00.976 common/iavf: not in enabled drivers build config 00:03:00.976 common/idpf: not in enabled drivers build config 00:03:00.976 common/ionic: not in enabled drivers build config 00:03:00.976 common/mvep: not in enabled drivers build config 00:03:00.976 common/octeontx: not in enabled drivers build config 00:03:00.976 bus/auxiliary: not in enabled drivers build config 00:03:00.976 bus/cdx: not in enabled drivers build config 00:03:00.976 bus/dpaa: not in enabled drivers build config 00:03:00.976 bus/fslmc: not in enabled drivers build config 00:03:00.976 bus/ifpga: not in enabled drivers build config 00:03:00.976 bus/platform: not in enabled drivers build config 00:03:00.976 bus/uacce: not in enabled drivers build config 00:03:00.976 bus/vmbus: not in enabled drivers build config 00:03:00.976 common/cnxk: not in enabled drivers build config 00:03:00.976 common/mlx5: not in enabled drivers build config 00:03:00.976 common/nfp: not in enabled drivers build config 00:03:00.976 common/nitrox: not in enabled drivers build config 00:03:00.976 common/qat: not in enabled drivers build config 00:03:00.976 common/sfc_efx: not in enabled drivers build config 00:03:00.976 mempool/bucket: not in enabled drivers build config 00:03:00.976 mempool/cnxk: not in enabled drivers build config 00:03:00.976 mempool/dpaa: not in enabled drivers build config 00:03:00.976 mempool/dpaa2: not in enabled drivers build config 00:03:00.976 
mempool/octeontx: not in enabled drivers build config 00:03:00.976 mempool/stack: not in enabled drivers build config 00:03:00.976 dma/cnxk: not in enabled drivers build config 00:03:00.976 dma/dpaa: not in enabled drivers build config 00:03:00.976 dma/dpaa2: not in enabled drivers build config 00:03:00.976 dma/hisilicon: not in enabled drivers build config 00:03:00.976 dma/idxd: not in enabled drivers build config 00:03:00.976 dma/ioat: not in enabled drivers build config 00:03:00.976 dma/odm: not in enabled drivers build config 00:03:00.976 dma/skeleton: not in enabled drivers build config 00:03:00.976 net/af_packet: not in enabled drivers build config 00:03:00.976 net/af_xdp: not in enabled drivers build config 00:03:00.976 net/ark: not in enabled drivers build config 00:03:00.976 net/atlantic: not in enabled drivers build config 00:03:00.976 net/avp: not in enabled drivers build config 00:03:00.976 net/axgbe: not in enabled drivers build config 00:03:00.976 net/bnx2x: not in enabled drivers build config 00:03:00.976 net/bnxt: not in enabled drivers build config 00:03:00.976 net/bonding: not in enabled drivers build config 00:03:00.976 net/cnxk: not in enabled drivers build config 00:03:00.976 net/cpfl: not in enabled drivers build config 00:03:00.976 net/cxgbe: not in enabled drivers build config 00:03:00.976 net/dpaa: not in enabled drivers build config 00:03:00.976 net/dpaa2: not in enabled drivers build config 00:03:00.976 net/e1000: not in enabled drivers build config 00:03:00.976 net/ena: not in enabled drivers build config 00:03:00.976 net/enetc: not in enabled drivers build config 00:03:00.976 net/enetfec: not in enabled drivers build config 00:03:00.976 net/enic: not in enabled drivers build config 00:03:00.976 net/failsafe: not in enabled drivers build config 00:03:00.976 net/fm10k: not in enabled drivers build config 00:03:00.976 net/gve: not in enabled drivers build config 00:03:00.976 net/hinic: not in enabled drivers build config 00:03:00.976 
net/hns3: not in enabled drivers build config 00:03:00.976 net/iavf: not in enabled drivers build config 00:03:00.976 net/ice: not in enabled drivers build config 00:03:00.976 net/idpf: not in enabled drivers build config 00:03:00.976 net/igc: not in enabled drivers build config 00:03:00.976 net/ionic: not in enabled drivers build config 00:03:00.976 net/ipn3ke: not in enabled drivers build config 00:03:00.976 net/ixgbe: not in enabled drivers build config 00:03:00.976 net/mana: not in enabled drivers build config 00:03:00.976 net/memif: not in enabled drivers build config 00:03:00.976 net/mlx4: not in enabled drivers build config 00:03:00.976 net/mlx5: not in enabled drivers build config 00:03:00.976 net/mvneta: not in enabled drivers build config 00:03:00.976 net/mvpp2: not in enabled drivers build config 00:03:00.976 net/netvsc: not in enabled drivers build config 00:03:00.976 net/nfb: not in enabled drivers build config 00:03:00.976 net/nfp: not in enabled drivers build config 00:03:00.976 net/ngbe: not in enabled drivers build config 00:03:00.976 net/ntnic: not in enabled drivers build config 00:03:00.976 net/null: not in enabled drivers build config 00:03:00.976 net/octeontx: not in enabled drivers build config 00:03:00.976 net/octeon_ep: not in enabled drivers build config 00:03:00.976 net/pcap: not in enabled drivers build config 00:03:00.976 net/pfe: not in enabled drivers build config 00:03:00.976 net/qede: not in enabled drivers build config 00:03:00.976 net/ring: not in enabled drivers build config 00:03:00.976 net/sfc: not in enabled drivers build config 00:03:00.976 net/softnic: not in enabled drivers build config 00:03:00.976 net/tap: not in enabled drivers build config 00:03:00.976 net/thunderx: not in enabled drivers build config 00:03:00.976 net/txgbe: not in enabled drivers build config 00:03:00.976 net/vdev_netvsc: not in enabled drivers build config 00:03:00.976 net/vhost: not in enabled drivers build config 00:03:00.976 net/virtio: not in 
enabled drivers build config 00:03:00.976 net/vmxnet3: not in enabled drivers build config 00:03:00.976 raw/cnxk_bphy: not in enabled drivers build config 00:03:00.976 raw/cnxk_gpio: not in enabled drivers build config 00:03:00.976 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:00.976 raw/ifpga: not in enabled drivers build config 00:03:00.976 raw/ntb: not in enabled drivers build config 00:03:00.976 raw/skeleton: not in enabled drivers build config 00:03:00.976 crypto/armv8: not in enabled drivers build config 00:03:00.976 crypto/bcmfs: not in enabled drivers build config 00:03:00.976 crypto/caam_jr: not in enabled drivers build config 00:03:00.976 crypto/ccp: not in enabled drivers build config 00:03:00.976 crypto/cnxk: not in enabled drivers build config 00:03:00.976 crypto/dpaa_sec: not in enabled drivers build config 00:03:00.976 crypto/dpaa2_sec: not in enabled drivers build config 00:03:00.976 crypto/ionic: not in enabled drivers build config 00:03:00.976 crypto/ipsec_mb: not in enabled drivers build config 00:03:00.976 crypto/mlx5: not in enabled drivers build config 00:03:00.976 crypto/mvsam: not in enabled drivers build config 00:03:00.976 crypto/nitrox: not in enabled drivers build config 00:03:00.976 crypto/null: not in enabled drivers build config 00:03:00.976 crypto/octeontx: not in enabled drivers build config 00:03:00.976 crypto/openssl: not in enabled drivers build config 00:03:00.976 crypto/scheduler: not in enabled drivers build config 00:03:00.976 crypto/uadk: not in enabled drivers build config 00:03:00.976 crypto/virtio: not in enabled drivers build config 00:03:00.976 compress/isal: not in enabled drivers build config 00:03:00.976 compress/mlx5: not in enabled drivers build config 00:03:00.976 compress/nitrox: not in enabled drivers build config 00:03:00.976 compress/octeontx: not in enabled drivers build config 00:03:00.976 compress/uadk: not in enabled drivers build config 00:03:00.976 compress/zlib: not in enabled drivers build 
config 00:03:00.976 regex/mlx5: not in enabled drivers build config 00:03:00.976 regex/cn9k: not in enabled drivers build config 00:03:00.976 ml/cnxk: not in enabled drivers build config 00:03:00.976 vdpa/ifc: not in enabled drivers build config 00:03:00.976 vdpa/mlx5: not in enabled drivers build config 00:03:00.976 vdpa/nfp: not in enabled drivers build config 00:03:00.976 vdpa/sfc: not in enabled drivers build config 00:03:00.976 event/cnxk: not in enabled drivers build config 00:03:00.976 event/dlb2: not in enabled drivers build config 00:03:00.976 event/dpaa: not in enabled drivers build config 00:03:00.976 event/dpaa2: not in enabled drivers build config 00:03:00.976 event/dsw: not in enabled drivers build config 00:03:00.976 event/opdl: not in enabled drivers build config 00:03:00.976 event/skeleton: not in enabled drivers build config 00:03:00.976 event/sw: not in enabled drivers build config 00:03:00.976 event/octeontx: not in enabled drivers build config 00:03:00.976 baseband/acc: not in enabled drivers build config 00:03:00.976 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:00.976 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:00.976 baseband/la12xx: not in enabled drivers build config 00:03:00.976 baseband/null: not in enabled drivers build config 00:03:00.976 baseband/turbo_sw: not in enabled drivers build config 00:03:00.976 gpu/cuda: not in enabled drivers build config 00:03:00.976 00:03:00.976 00:03:00.976 Build targets in project: 219 00:03:00.976 00:03:00.976 DPDK 24.11.0-rc0 00:03:00.976 00:03:00.976 User defined options 00:03:00.976 libdir : lib 00:03:00.976 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:00.976 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:00.976 c_link_args : 00:03:00.976 enable_docs : false 00:03:00.976 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:00.976 enable_kmods : false 00:03:00.976 machine : native 00:03:00.976 tests : false 
00:03:00.976 00:03:00.976 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:00.976 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:00.977 18:13:49 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:00.977 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:00.977 [1/718] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:00.977 [2/718] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:00.977 [3/718] Linking static target lib/librte_kvargs.a 00:03:00.977 [4/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:00.977 [5/718] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:00.977 [6/718] Linking static target lib/librte_log.a 00:03:00.977 [7/718] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:03:00.977 [8/718] Linking static target lib/librte_argparse.a 00:03:00.977 [9/718] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.977 [10/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:00.977 [11/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:00.977 [12/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:01.234 [13/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:01.234 [14/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:01.234 [15/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:01.234 [16/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:01.234 [17/718] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.234 [18/718] Generating lib/log.sym_chk with a custom command (wrapped by 
meson to capture output) 00:03:01.234 [19/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:01.234 [20/718] Linking target lib/librte_log.so.25.0 00:03:01.492 [21/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:01.492 [22/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:01.492 [23/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:01.492 [24/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:01.492 [25/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:01.492 [26/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:01.492 [27/718] Linking static target lib/librte_telemetry.a 00:03:01.492 [28/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:01.492 [29/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:01.750 [30/718] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:03:01.750 [31/718] Linking target lib/librte_kvargs.so.25.0 00:03:01.750 [32/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:01.750 [33/718] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:03:01.750 [34/718] Linking target lib/librte_argparse.so.25.0 00:03:01.750 [35/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:01.750 [36/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:02.010 [37/718] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.010 [38/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:02.010 [39/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:02.010 [40/718] Linking target lib/librte_telemetry.so.25.0 00:03:02.010 [41/718] Compiling C 
object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:02.010 [42/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:02.010 [43/718] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:03:02.010 [44/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:02.010 [45/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:02.010 [46/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:02.010 [47/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:02.010 [48/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:02.268 [49/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:02.268 [50/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:02.268 [51/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:02.268 [52/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:02.525 [53/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:02.525 [54/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:02.525 [55/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:02.525 [56/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:02.525 [57/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:02.525 [58/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:02.525 [59/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:02.525 [60/718] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:02.525 [61/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:02.783 [62/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:02.783 
[63/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:02.783 [64/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:02.783 [65/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:02.783 [66/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:02.783 [67/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:02.783 [68/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:02.783 [69/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:02.783 [70/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:03.041 [71/718] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:03.041 [72/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:03.041 [73/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:03.041 [74/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:03.041 [75/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:03.041 [76/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:03.041 [77/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:03.041 [78/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:03.041 [79/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:03.299 [80/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:03.299 [81/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:03.299 [82/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:03.299 [83/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:03.299 [84/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:03:03.299 [85/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:03.557 [86/718] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:03.557 [87/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:03.557 [88/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:03.557 [89/718] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:03.557 [90/718] Linking static target lib/librte_ring.a 00:03:03.557 [91/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:03.557 [92/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:03.814 [93/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:03.814 [94/718] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.814 [95/718] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:03.814 [96/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:03.814 [97/718] Linking static target lib/librte_eal.a 00:03:03.814 [98/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:04.072 [99/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:04.072 [100/718] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:04.072 [101/718] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:04.072 [102/718] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:04.072 [103/718] Linking static target lib/librte_rcu.a 00:03:04.072 [104/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:04.072 [105/718] Linking static target lib/librte_mempool.a 00:03:04.072 [106/718] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:04.072 [107/718] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:04.329 [108/718] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:04.329 [109/718] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.329 [110/718] 
Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:04.329 [111/718] Linking static target lib/librte_meter.a 00:03:04.329 [112/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:04.329 [113/718] Linking static target lib/librte_mbuf.a 00:03:04.329 [114/718] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:04.329 [115/718] Linking static target lib/librte_net.a 00:03:04.591 [116/718] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.591 [117/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:04.591 [118/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:04.591 [119/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:04.591 [120/718] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.591 [121/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:04.591 [122/718] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.851 [123/718] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.108 [124/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:05.108 [125/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:05.108 [126/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:05.108 [127/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:05.108 [128/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:05.108 [129/718] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:05.108 [130/718] Linking static target lib/librte_pci.a 00:03:05.108 [131/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:05.365 [132/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:05.365 
[133/718] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.365 [134/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:05.365 [135/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:05.365 [136/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:05.365 [137/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:05.623 [138/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:05.623 [139/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:05.623 [140/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:05.623 [141/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:05.623 [142/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:05.623 [143/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:05.623 [144/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:05.623 [145/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:05.623 [146/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:05.623 [147/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:05.623 [148/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:05.881 [149/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:05.881 [150/718] Linking static target lib/librte_cmdline.a 00:03:05.881 [151/718] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:05.881 [152/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:05.881 [153/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:05.881 [154/718] Linking static target lib/librte_metrics.a 00:03:05.881 [155/718] 
Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:06.139 [156/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:06.139 [157/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:06.139 [158/718] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.423 [159/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:06.423 [160/718] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.423 [161/718] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:06.423 [162/718] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:06.423 [163/718] Linking static target lib/librte_timer.a 00:03:06.684 [164/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:06.684 [165/718] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:06.684 [166/718] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.684 [167/718] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:06.941 [168/718] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:06.941 [169/718] Linking static target lib/librte_bitratestats.a 00:03:07.199 [170/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:07.199 [171/718] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:07.199 [172/718] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:07.199 [173/718] Linking static target lib/librte_bbdev.a 00:03:07.199 [174/718] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.457 [175/718] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:07.457 [176/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:07.457 [177/718] Linking static target lib/librte_hash.a 00:03:07.457 [178/718] Compiling C object 
lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:07.457 [179/718] Linking static target lib/acl/libavx2_tmp.a 00:03:07.457 [180/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:07.457 [181/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:07.714 [182/718] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.714 [183/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:07.714 [184/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:07.972 [185/718] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.972 [186/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:07.972 [187/718] Linking static target lib/librte_ethdev.a 00:03:07.972 [188/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:07.972 [189/718] Linking target lib/librte_eal.so.25.0 00:03:07.972 [190/718] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.972 [191/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:07.972 [192/718] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:03:07.972 [193/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:07.972 [194/718] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:07.972 [195/718] Linking target lib/librte_ring.so.25.0 00:03:07.972 [196/718] Linking target lib/librte_meter.so.25.0 00:03:07.972 [197/718] Linking target lib/librte_pci.so.25.0 00:03:08.230 [198/718] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:03:08.230 [199/718] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:03:08.230 [200/718] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:03:08.230 [201/718] Linking target lib/librte_rcu.so.25.0 00:03:08.230 [202/718] Linking target 
lib/librte_mempool.so.25.0 00:03:08.230 [203/718] Linking target lib/librte_timer.so.25.0 00:03:08.230 [204/718] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:03:08.230 [205/718] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:03:08.230 [206/718] Linking static target lib/librte_cfgfile.a 00:03:08.230 [207/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:08.230 [208/718] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:03:08.230 [209/718] Linking target lib/librte_mbuf.so.25.0 00:03:08.230 [210/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:08.230 [211/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:08.488 [212/718] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:03:08.488 [213/718] Linking target lib/librte_net.so.25.0 00:03:08.488 [214/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:08.488 [215/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:08.488 [216/718] Linking target lib/librte_bbdev.so.25.0 00:03:08.488 [217/718] Linking static target lib/librte_bpf.a 00:03:08.488 [218/718] Linking static target lib/librte_compressdev.a 00:03:08.488 [219/718] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:03:08.488 [220/718] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.488 [221/718] Linking target lib/librte_cmdline.so.25.0 00:03:08.488 [222/718] Linking target lib/librte_hash.so.25.0 00:03:08.488 [223/718] Linking target lib/librte_cfgfile.so.25.0 00:03:08.746 [224/718] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.746 [225/718] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:03:08.746 [226/718] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:08.746 [227/718] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.746 [228/718] Linking target lib/librte_compressdev.so.25.0 00:03:08.746 [229/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:09.004 [230/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:09.004 [231/718] Linking static target lib/librte_acl.a 00:03:09.004 [232/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:09.004 [233/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:09.004 [234/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:09.004 [235/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:09.004 [236/718] Linking static target lib/librte_distributor.a 00:03:09.004 [237/718] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.262 [238/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:09.262 [239/718] Linking target lib/librte_acl.so.25.0 00:03:09.262 [240/718] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.262 [241/718] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:03:09.262 [242/718] Linking target lib/librte_distributor.so.25.0 00:03:09.520 [243/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:09.520 [244/718] Linking static target lib/librte_dmadev.a 00:03:09.520 [245/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:09.520 [246/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:09.520 [247/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:09.777 [248/718] 
Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.777 [249/718] Linking target lib/librte_dmadev.so.25.0 00:03:09.777 [250/718] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:09.777 [251/718] Linking static target lib/librte_efd.a 00:03:09.777 [252/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:09.777 [253/718] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:03:10.035 [254/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:10.036 [255/718] Linking static target lib/librte_cryptodev.a 00:03:10.036 [256/718] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.036 [257/718] Linking target lib/librte_efd.so.25.0 00:03:10.294 [258/718] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:10.294 [259/718] Linking static target lib/librte_dispatcher.a 00:03:10.294 [260/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:10.294 [261/718] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:10.294 [262/718] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:10.294 [263/718] Linking static target lib/librte_gpudev.a 00:03:10.552 [264/718] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.552 [265/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:10.552 [266/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:10.840 [267/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:10.840 [268/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:10.840 [269/718] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:10.840 [270/718] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:03:10.840 [271/718] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.840 [272/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:10.840 [273/718] Linking target lib/librte_cryptodev.so.25.0 00:03:10.840 [274/718] Linking target lib/librte_gpudev.so.25.0 00:03:11.105 [275/718] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:11.105 [276/718] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:03:11.105 [277/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:11.105 [278/718] Linking static target lib/librte_gro.a 00:03:11.105 [279/718] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:11.105 [280/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:11.105 [281/718] Linking static target lib/librte_eventdev.a 00:03:11.105 [282/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:11.105 [283/718] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:11.105 [284/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:11.364 [285/718] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.364 [286/718] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:11.364 [287/718] Linking static target lib/librte_gso.a 00:03:11.364 [288/718] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.364 [289/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:11.364 [290/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:11.364 [291/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:11.622 [292/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:11.622 [293/718] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:11.622 [294/718] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:11.622 [295/718] Linking static target lib/librte_jobstats.a 00:03:11.622 [296/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:11.622 [297/718] Linking static target lib/librte_ip_frag.a 00:03:11.880 [298/718] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.880 [299/718] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.880 [300/718] Linking target lib/librte_ethdev.so.25.0 00:03:11.880 [301/718] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:11.880 [302/718] Linking target lib/librte_jobstats.so.25.0 00:03:11.880 [303/718] Linking static target lib/librte_latencystats.a 00:03:11.880 [304/718] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.880 [305/718] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:11.880 [306/718] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:03:11.880 [307/718] Linking target lib/librte_metrics.so.25.0 00:03:11.880 [308/718] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.139 [309/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:12.139 [310/718] Linking target lib/librte_bpf.so.25.0 00:03:12.139 [311/718] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:03:12.139 [312/718] Linking target lib/librte_gro.so.25.0 00:03:12.139 [313/718] Linking target lib/librte_bitratestats.so.25.0 00:03:12.139 [314/718] Linking target lib/librte_gso.so.25.0 00:03:12.139 [315/718] Linking target lib/librte_ip_frag.so.25.0 00:03:12.139 [316/718] Linking target lib/librte_latencystats.so.25.0 00:03:12.139 [317/718] Generating 
symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:03:12.139 [318/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:12.139 [319/718] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:12.139 [320/718] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:12.139 [321/718] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:03:12.396 [322/718] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:12.396 [323/718] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:12.396 [324/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:12.396 [325/718] Linking static target lib/librte_lpm.a 00:03:12.396 [326/718] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:12.396 [327/718] Linking static target lib/librte_pcapng.a 00:03:12.656 [328/718] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:12.656 [329/718] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.656 [330/718] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:12.656 [331/718] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:12.656 [332/718] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.656 [333/718] Linking target lib/librte_pcapng.so.25.0 00:03:12.656 [334/718] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:12.656 [335/718] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:12.656 [336/718] Linking target lib/librte_lpm.so.25.0 00:03:12.656 [337/718] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:12.656 [338/718] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.656 [339/718] Generating symbol file 
lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:12.656 [340/718] Linking target lib/librte_eventdev.so.25.0 00:03:12.656 [341/718] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:12.914 [342/718] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:12.914 [343/718] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:12.914 [344/718] Linking target lib/librte_dispatcher.so.25.0 00:03:12.914 [345/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:12.914 [346/718] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:12.914 [347/718] Linking static target lib/librte_member.a 00:03:12.914 [348/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:12.914 [349/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:12.914 [350/718] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:12.914 [351/718] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:12.914 [352/718] Linking static target lib/librte_power.a 00:03:12.914 [353/718] Linking static target lib/librte_regexdev.a 00:03:13.173 [354/718] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:13.173 [355/718] Linking static target lib/librte_rawdev.a 00:03:13.173 [356/718] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.173 [357/718] Linking target lib/librte_member.so.25.0 00:03:13.173 [358/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:13.173 [359/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:13.431 [360/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:13.431 [361/718] Linking static target lib/librte_mldev.a 00:03:13.431 [362/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:13.431 [363/718] Compiling C object 
lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:13.431 [364/718] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:13.431 [365/718] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.431 [366/718] Linking static target lib/librte_reorder.a 00:03:13.431 [367/718] Linking target lib/librte_rawdev.so.25.0 00:03:13.431 [368/718] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:13.431 [369/718] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.431 [370/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:13.432 [371/718] Linking target lib/librte_power.so.25.0 00:03:13.432 [372/718] Linking static target lib/librte_rib.a 00:03:13.690 [373/718] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:13.690 [374/718] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.690 [375/718] Linking target lib/librte_regexdev.so.25.0 00:03:13.690 [376/718] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.690 [377/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:13.690 [378/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:13.690 [379/718] Linking target lib/librte_reorder.so.25.0 00:03:13.948 [380/718] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:13.948 [381/718] Linking static target lib/librte_security.a 00:03:13.948 [382/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:13.948 [383/718] Linking static target lib/librte_stack.a 00:03:13.948 [384/718] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:13.948 [385/718] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.948 [386/718] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:13.948 
[387/718] Linking target lib/librte_rib.so.25.0 00:03:13.948 [388/718] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.948 [389/718] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:13.948 [390/718] Linking target lib/librte_stack.so.25.0 00:03:14.215 [391/718] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:14.216 [392/718] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:14.216 [393/718] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.216 [394/718] Linking target lib/librte_security.so.25.0 00:03:14.216 [395/718] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:14.477 [396/718] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.477 [397/718] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:14.477 [398/718] Linking target lib/librte_mldev.so.25.0 00:03:14.477 [399/718] Linking static target lib/librte_sched.a 00:03:14.477 [400/718] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:14.477 [401/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:14.735 [402/718] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:14.735 [403/718] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.735 [404/718] Linking target lib/librte_sched.so.25.0 00:03:14.735 [405/718] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:14.735 [406/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:14.735 [407/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:14.735 [408/718] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:14.994 [409/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:14.994 [410/718] Compiling C object 
lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:14.994 [411/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:15.252 [412/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:15.252 [413/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:15.252 [414/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:15.252 [415/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:15.510 [416/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:15.510 [417/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:15.510 [418/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:15.510 [419/718] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:15.510 [420/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:15.510 [421/718] Linking static target lib/librte_ipsec.a 00:03:15.510 [422/718] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:15.767 [423/718] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.767 [424/718] Linking target lib/librte_ipsec.so.25.0 00:03:15.767 [425/718] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:15.767 [426/718] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:15.768 [427/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:16.025 [428/718] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:16.025 [429/718] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:16.025 [430/718] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:16.025 [431/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:16.025 [432/718] Linking static target lib/librte_fib.a 00:03:16.283 [433/718] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.283 [434/718] Linking target lib/librte_fib.so.25.0 
00:03:16.283 [435/718] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:16.283 [436/718] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:16.283 [437/718] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:16.540 [438/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:16.540 [439/718] Linking static target lib/librte_pdcp.a 00:03:16.540 [440/718] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.540 [441/718] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:16.540 [442/718] Linking target lib/librte_pdcp.so.25.0 00:03:16.797 [443/718] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:16.797 [444/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:16.797 [445/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:16.797 [446/718] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:16.797 [447/718] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:17.054 [448/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:17.054 [449/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:17.054 [450/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:17.054 [451/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:17.312 [452/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:17.312 [453/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:17.312 [454/718] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:17.312 [455/718] Linking static target lib/librte_pdump.a 00:03:17.312 [456/718] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:17.569 [457/718] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 
00:03:17.569 [458/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:17.569 [459/718] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.569 [460/718] Linking target lib/librte_pdump.so.25.0 00:03:17.569 [461/718] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:17.569 [462/718] Linking static target lib/librte_port.a 00:03:17.827 [463/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:17.827 [464/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:17.827 [465/718] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:17.827 [466/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:17.827 [467/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:17.827 [468/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:18.085 [469/718] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.085 [470/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:18.085 [471/718] Linking target lib/librte_port.so.25.0 00:03:18.085 [472/718] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:18.085 [473/718] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:18.085 [474/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:18.342 [475/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:18.342 [476/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:18.342 [477/718] Linking static target lib/librte_table.a 00:03:18.342 [478/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:18.600 [479/718] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:18.600 [480/718] Compiling C object 
lib/librte_graph.a.p/graph_graph.c.o 00:03:18.600 [481/718] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.600 [482/718] Linking target lib/librte_table.so.25.0 00:03:18.859 [483/718] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:18.859 [484/718] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:18.859 [485/718] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:03:18.859 [486/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:19.117 [487/718] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:19.117 [488/718] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:19.117 [489/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:19.117 [490/718] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:19.117 [491/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:19.375 [492/718] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:19.375 [493/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:19.375 [494/718] Linking static target lib/librte_graph.a 00:03:19.375 [495/718] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:19.375 [496/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:19.634 [497/718] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:19.634 [498/718] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:19.898 [499/718] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:19.898 [500/718] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.898 [501/718] Linking target lib/librte_graph.so.25.0 00:03:19.898 [502/718] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:03:19.898 [503/718] 
Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:19.898 [504/718] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:19.898 [505/718] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:20.171 [506/718] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:20.171 [507/718] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:20.171 [508/718] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:20.171 [509/718] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:20.171 [510/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:20.171 [511/718] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:20.171 [512/718] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:20.429 [513/718] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:20.429 [514/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:20.429 [515/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:20.429 [516/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:20.429 [517/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:20.686 [518/718] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:20.686 [519/718] Linking static target lib/librte_node.a 00:03:20.686 [520/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:20.686 [521/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:20.686 [522/718] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:20.686 [523/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:20.686 [524/718] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:20.945 [525/718] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.945 [526/718] Linking target 
lib/librte_node.so.25.0 00:03:20.945 [527/718] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.945 [528/718] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.945 [529/718] Linking static target drivers/librte_bus_vdev.a 00:03:20.945 [530/718] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:20.945 [531/718] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.945 [532/718] Linking static target drivers/librte_bus_pci.a 00:03:21.202 [533/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:21.202 [534/718] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:21.202 [535/718] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:21.202 [536/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:21.202 [537/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:21.202 [538/718] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.202 [539/718] Linking target drivers/librte_bus_vdev.so.25.0 00:03:21.202 [540/718] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:21.202 [541/718] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:21.202 [542/718] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:03:21.459 [543/718] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.459 [544/718] Linking target drivers/librte_bus_pci.so.25.0 00:03:21.459 [545/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:21.459 [546/718] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:21.459 [547/718] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.459 [548/718] Linking static target drivers/librte_mempool_ring.a 00:03:21.459 [549/718] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:21.459 [550/718] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:03:21.459 [551/718] Linking target drivers/librte_mempool_ring.so.25.0 00:03:21.717 [552/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:21.717 [553/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:21.974 [554/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:21.974 [555/718] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:22.540 [556/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:22.540 [557/718] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:22.540 [558/718] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:22.798 [559/718] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:22.798 [560/718] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:22.798 [561/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:22.798 [562/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:23.055 [563/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:23.055 [564/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:23.055 [565/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:23.055 [566/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:23.056 [567/718] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture 
output) 00:03:23.313 [568/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:23.571 [569/718] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:23.571 [570/718] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:23.571 [571/718] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:23.829 [572/718] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:23.829 [573/718] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:23.829 [574/718] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:24.087 [575/718] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:24.087 [576/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:24.087 [577/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:24.087 [578/718] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:24.087 [579/718] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:24.087 [580/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:24.345 [581/718] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:24.345 [582/718] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:24.345 [583/718] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:24.345 [584/718] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:24.345 [585/718] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:24.345 [586/718] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:24.615 [587/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:24.615 [588/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:24.615 [589/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:24.873 [590/718] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:24.873 [591/718] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:24.873 [592/718] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:24.873 [593/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:24.873 [594/718] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:25.130 [595/718] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:25.130 [596/718] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.130 [597/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:25.130 [598/718] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.130 [599/718] Linking static target drivers/librte_net_i40e.a 00:03:25.130 [600/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:25.388 [601/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:25.388 [602/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:25.388 [603/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:25.645 [604/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:25.645 [605/718] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.645 [606/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:25.645 [607/718] Linking target drivers/librte_net_i40e.so.25.0 00:03:25.645 [608/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:25.645 [609/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:25.902 [610/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 
00:03:26.160 [611/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:26.160 [612/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:26.160 [613/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:26.160 [614/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:26.160 [615/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:26.160 [616/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:26.417 [617/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:26.417 [618/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:26.417 [619/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:26.417 [620/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:26.417 [621/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:26.674 [622/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:26.674 [623/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:26.932 [624/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:26.932 [625/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:26.932 [626/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:26.932 [627/718] Linking static target lib/librte_vhost.a 00:03:27.190 [628/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:27.448 [629/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:27.448 [630/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:27.448 
[631/718] Linking static target lib/librte_pipeline.a 00:03:27.448 [632/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:27.448 [633/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:27.705 [634/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:27.705 [635/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:27.705 [636/718] Linking target app/dpdk-dumpcap 00:03:27.705 [637/718] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.964 [638/718] Linking target app/dpdk-pdump 00:03:27.964 [639/718] Linking target lib/librte_vhost.so.25.0 00:03:27.964 [640/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:27.964 [641/718] Linking target app/dpdk-graph 00:03:27.964 [642/718] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:27.964 [643/718] Linking target app/dpdk-proc-info 00:03:28.226 [644/718] Linking target app/dpdk-test-cmdline 00:03:28.226 [645/718] Linking target app/dpdk-test-acl 00:03:28.226 [646/718] Linking target app/dpdk-test-compress-perf 00:03:28.226 [647/718] Linking target app/dpdk-test-crypto-perf 00:03:28.226 [648/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:28.226 [649/718] Linking target app/dpdk-test-fib 00:03:28.226 [650/718] Linking target app/dpdk-test-dma-perf 00:03:28.226 [651/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:28.484 [652/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:28.484 [653/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:28.484 [654/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:28.484 [655/718] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:28.484 [656/718] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:28.484 [657/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:28.742 [658/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:28.742 [659/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:28.742 [660/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:28.742 [661/718] Linking target app/dpdk-test-gpudev 00:03:29.000 [662/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:29.000 [663/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:29.000 [664/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:29.000 [665/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:29.000 [666/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:29.000 [667/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:29.000 [668/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:29.258 [669/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:29.258 [670/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:29.258 [671/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:29.258 [672/718] Linking target app/dpdk-test-bbdev 00:03:29.258 [673/718] Linking target app/dpdk-test-flow-perf 00:03:29.258 [674/718] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.258 [675/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:29.258 [676/718] Linking target lib/librte_pipeline.so.25.0 00:03:29.258 [677/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:29.518 [678/718] Linking target app/dpdk-test-eventdev 
00:03:29.518 [679/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:29.518 [680/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:29.518 [681/718] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:29.518 [682/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:29.781 [683/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:29.781 [684/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:29.781 [685/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:30.039 [686/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:30.039 [687/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:30.039 [688/718] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:30.039 [689/718] Linking target app/dpdk-test-pipeline 00:03:30.298 [690/718] Linking target app/dpdk-test-mldev 00:03:30.298 [691/718] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:30.298 [692/718] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:30.298 [693/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:30.298 [694/718] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:30.557 [695/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:30.557 [696/718] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:30.557 [697/718] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:30.557 [698/718] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:30.557 [699/718] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:30.815 [700/718] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:30.815 [701/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:30.815 [702/718] Compiling C object 
app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:31.074 [703/718] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:31.074 [704/718] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:31.074 [705/718] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:31.332 [706/718] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:31.332 [707/718] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:31.332 [708/718] Linking target app/dpdk-test-sad 00:03:31.591 [709/718] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:31.591 [710/718] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:31.591 [711/718] Linking target app/dpdk-test-regex 00:03:31.850 [712/718] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:31.850 [713/718] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:32.111 [714/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:32.111 [715/718] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:32.111 [716/718] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:32.111 [717/718] Linking target app/dpdk-test-security-perf 00:03:32.679 [718/718] Linking target app/dpdk-testpmd 00:03:32.679 18:14:21 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:32.679 18:14:21 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:32.679 18:14:21 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:32.679 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:32.679 [0/1] Installing files. 
00:03:32.943 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:32.943 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.943 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.944 Installing 
/home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.944 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 
00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:32.945 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.945 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c 
to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:32.946 Installing 
/home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.946 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.946 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.946 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 
00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:32.947 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.947 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.947 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.948 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:32.948 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:32.948 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_telemetry.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing 
lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_efd.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_pcapng.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:32.948 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:33.229 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:33.229 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:33.229 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:33.229 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:33.229 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.229 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:33.229 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.229 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing 
/home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:33.230 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.230 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.231 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:03:33.232 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:03:33.232 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25
00:03:33.232 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so
00:03:33.232 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25
00:03:33.232 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so
00:03:33.232 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25
00:03:33.232 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so
00:03:33.232 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25
00:03:33.232 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so
00:03:33.232 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25
00:03:33.232 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
00:03:33.232 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25
00:03:33.232 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so
00:03:33.232 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25
00:03:33.232 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so
00:03:33.232 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25
00:03:33.232 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so
00:03:33.232 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25
00:03:33.232 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so
00:03:33.232 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25
00:03:33.232 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so
00:03:33.232 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25
00:03:33.232 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so
00:03:33.232 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25
00:03:33.232 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so
00:03:33.232 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25
00:03:33.232 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so
00:03:33.232 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25
00:03:33.232 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so
00:03:33.232 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25
00:03:33.232 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so
00:03:33.232 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25
00:03:33.232 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so
00:03:33.232 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25
00:03:33.232 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so
00:03:33.232 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25
00:03:33.232 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so
00:03:33.232 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25
00:03:33.232 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so
00:03:33.232 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25
00:03:33.232 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so
00:03:33.232 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25
00:03:33.232 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so
00:03:33.232 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25
00:03:33.232 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so
00:03:33.232 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25
00:03:33.232 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so
00:03:33.232 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25
00:03:33.232 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so
00:03:33.232 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25
00:03:33.232 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so
00:03:33.232 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25
00:03:33.232 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so
00:03:33.232 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25
00:03:33.232 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so
00:03:33.232 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25
00:03:33.232 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so
00:03:33.232 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25
00:03:33.232 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so
00:03:33.232 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25
00:03:33.232 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so
00:03:33.232 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25
00:03:33.232 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so
00:03:33.232 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25
00:03:33.232 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so
00:03:33.232 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25
00:03:33.232 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so
00:03:33.232 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25
00:03:33.232 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so
00:03:33.232 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25
00:03:33.232 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so
00:03:33.232 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25
00:03:33.232 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so
00:03:33.232 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25
00:03:33.232 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so
00:03:33.232 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25
00:03:33.232 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so'
00:03:33.232 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25'
00:03:33.232 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0'
00:03:33.232 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so'
00:03:33.232 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25'
00:03:33.233 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0'
00:03:33.233 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so'
00:03:33.233 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25'
00:03:33.233 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0'
00:03:33.233 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so'
00:03:33.233 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25'
00:03:33.233 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0'
00:03:33.233 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so
00:03:33.233 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25
00:03:33.233 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so
00:03:33.233 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25
00:03:33.233 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so
00:03:33.233 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25
00:03:33.233 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so
00:03:33.233 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25
00:03:33.233 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so
00:03:33.233 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25
00:03:33.233 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so
00:03:33.233 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25
00:03:33.233 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so
00:03:33.233 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25
00:03:33.233 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so
00:03:33.233 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25
00:03:33.233 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so
00:03:33.233 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25
00:03:33.233 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so
00:03:33.233 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25
00:03:33.233 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so
00:03:33.233 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25
00:03:33.233 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so
00:03:33.233 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25
00:03:33.233 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so
00:03:33.233 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25
00:03:33.233 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so
00:03:33.233 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25
00:03:33.233 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so
00:03:33.233 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25
00:03:33.233 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so
00:03:33.233 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25
00:03:33.233 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:03:33.233 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25
00:03:33.233 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:03:33.233 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25
00:03:33.233 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:03:33.233 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25
00:03:33.233 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:03:33.233 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25
00:03:33.233 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so
00:03:33.233 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25
00:03:33.233 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so
00:03:33.233 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25
00:03:33.233 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so
00:03:33.233 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25
00:03:33.233 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so
00:03:33.233 Running custom
install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:33.491 18:14:22 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:33.491 ************************************ 00:03:33.491 END TEST build_native_dpdk 00:03:33.491 ************************************ 00:03:33.491 18:14:22 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:33.491 00:03:33.491 real 0m38.522s 00:03:33.491 user 4m25.884s 00:03:33.491 sys 0m41.994s 00:03:33.491 18:14:22 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:33.491 18:14:22 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:33.491 18:14:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:33.491 18:14:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:33.491 18:14:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:33.491 18:14:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:33.491 18:14:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:33.491 18:14:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:33.491 18:14:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:33.491 18:14:22 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:33.491 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:33.491 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:33.491 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:33.491 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:33.749 Using 'verbs' RDMA provider 00:03:45.152 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 
00:03:57.374 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:57.374 Creating mk/config.mk...done. 00:03:57.374 Creating mk/cc.flags.mk...done. 00:03:57.374 Type 'make' to build. 00:03:57.374 18:14:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:57.374 18:14:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:57.374 18:14:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:57.374 18:14:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:57.374 ************************************ 00:03:57.374 START TEST make 00:03:57.374 ************************************ 00:03:57.374 18:14:45 make -- common/autotest_common.sh@1125 -- $ make -j10 00:03:57.374 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:57.374 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:57.374 meson setup builddir \ 00:03:57.374 -Dwith-libaio=enabled \ 00:03:57.374 -Dwith-liburing=enabled \ 00:03:57.374 -Dwith-libvfn=disabled \ 00:03:57.374 -Dwith-spdk=false && \ 00:03:57.374 meson compile -C builddir && \ 00:03:57.374 cd -) 00:03:57.374 make[1]: Nothing to be done for 'all'. 
00:03:58.747 The Meson build system 00:03:58.747 Version: 1.5.0 00:03:58.747 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:58.747 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:58.747 Build type: native build 00:03:58.747 Project name: xnvme 00:03:58.747 Project version: 0.7.3 00:03:58.747 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:58.747 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:58.747 Host machine cpu family: x86_64 00:03:58.747 Host machine cpu: x86_64 00:03:58.747 Message: host_machine.system: linux 00:03:58.747 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:58.747 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:58.747 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:58.747 Run-time dependency threads found: YES 00:03:58.747 Has header "setupapi.h" : NO 00:03:58.747 Has header "linux/blkzoned.h" : YES 00:03:58.747 Has header "linux/blkzoned.h" : YES (cached) 00:03:58.747 Has header "libaio.h" : YES 00:03:58.747 Library aio found: YES 00:03:58.747 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:58.747 Run-time dependency liburing found: YES 2.2 00:03:58.747 Dependency libvfn skipped: feature with-libvfn disabled 00:03:58.747 Run-time dependency appleframeworks found: NO (tried framework) 00:03:58.747 Run-time dependency appleframeworks found: NO (tried framework) 00:03:58.747 Configuring xnvme_config.h using configuration 00:03:58.747 Configuring xnvme.spec using configuration 00:03:58.747 Run-time dependency bash-completion found: YES 2.11 00:03:58.747 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:58.747 Program cp found: YES (/usr/bin/cp) 00:03:58.747 Has header "winsock2.h" : NO 00:03:58.747 Has header "dbghelp.h" : NO 00:03:58.747 Library rpcrt4 found: NO 00:03:58.747 Library rt found: YES 00:03:58.747 Checking for function "clock_gettime" with dependency -lrt: YES 
00:03:58.747 Found CMake: /usr/bin/cmake (3.27.7) 00:03:58.747 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:58.747 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:58.747 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:58.747 Build targets in project: 32 00:03:58.747 00:03:58.747 xnvme 0.7.3 00:03:58.747 00:03:58.747 User defined options 00:03:58.747 with-libaio : enabled 00:03:58.747 with-liburing: enabled 00:03:58.747 with-libvfn : disabled 00:03:58.747 with-spdk : false 00:03:58.747 00:03:58.747 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:59.005 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:59.005 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:59.273 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:59.273 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:59.273 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:59.273 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:59.273 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:59.273 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:59.273 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:59.273 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:03:59.273 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:59.273 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:59.273 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:59.273 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:59.273 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:59.273 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:59.273 [16/203] Compiling C 
object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:59.273 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:59.273 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:59.273 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:59.273 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:59.273 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:59.273 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:59.273 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:59.273 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:59.531 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:59.531 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:59.531 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:59.531 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:59.531 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:59.531 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:59.531 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:59.531 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:59.531 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:59.531 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:59.531 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:59.531 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:59.531 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:59.531 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:59.531 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:59.531 [40/203] Compiling C object 
lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:59.531 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:59.531 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:59.531 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:59.531 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:59.531 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:59.531 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:59.531 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:59.531 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:59.531 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:59.531 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:59.531 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:59.531 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:59.531 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:59.531 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:59.531 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:59.531 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:59.531 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:59.531 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:59.531 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:59.531 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:59.531 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:59.788 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:59.788 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:59.788 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:59.788 [65/203] Compiling C 
object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:59.788 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:59.788 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:59.788 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:59.788 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:59.788 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:59.788 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:59.788 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:59.788 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:59.788 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:59.788 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:59.788 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:59.788 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:59.788 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:59.788 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:00.046 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:00.046 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:00.046 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:00.046 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:00.046 [84/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:00.046 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:00.046 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:00.046 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:00.046 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:00.046 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:00.046 [90/203] 
Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:00.046 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:00.046 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:00.046 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:00.046 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:00.046 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:00.046 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:00.046 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:00.046 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:00.046 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:00.046 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:00.046 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:00.046 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:00.046 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:00.046 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:00.305 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:00.305 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:00.305 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:00.305 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:00.305 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:00.305 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:00.305 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:00.305 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:00.305 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:00.305 [114/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:00.305 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:00.305 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:00.305 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:00.305 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:00.305 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:00.305 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:00.305 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:00.305 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:00.305 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:00.305 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:00.305 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:00.305 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:00.305 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:00.305 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:00.305 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:00.305 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:00.305 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:00.305 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:00.305 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:00.305 [134/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:00.305 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:00.563 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:00.563 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:00.564 [138/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:00.564 [139/203] Linking target lib/libxnvme.so 00:04:00.564 [140/203] 
Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:00.564 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:00.564 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:00.564 [143/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:00.564 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:00.564 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:00.564 [146/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:00.564 [147/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:00.564 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:00.564 [149/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:00.564 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:00.564 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:00.564 [152/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:00.564 [153/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:00.564 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:00.822 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:00.822 [156/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:00.822 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:00.822 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:00.822 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:00.822 [160/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:00.822 [161/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:00.822 [162/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:00.822 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:00.822 [164/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:00.822 [165/203] Compiling C object 
examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:00.822 [166/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:00.822 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:00.822 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:00.822 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:00.822 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:00.822 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:01.080 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:01.080 [173/203] Linking static target lib/libxnvme.a 00:04:01.080 [174/203] Linking target tests/xnvme_tests_xnvme_file 00:04:01.080 [175/203] Linking target tests/xnvme_tests_async_intf 00:04:01.080 [176/203] Linking target tests/xnvme_tests_enum 00:04:01.080 [177/203] Linking target tests/xnvme_tests_buf 00:04:01.080 [178/203] Linking target tests/xnvme_tests_cli 00:04:01.080 [179/203] Linking target tests/xnvme_tests_scc 00:04:01.080 [180/203] Linking target tests/xnvme_tests_znd_append 00:04:01.080 [181/203] Linking target tests/xnvme_tests_lblk 00:04:01.080 [182/203] Linking target tests/xnvme_tests_ioworker 00:04:01.080 [183/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:01.080 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:01.080 [185/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:01.080 [186/203] Linking target tests/xnvme_tests_znd_state 00:04:01.080 [187/203] Linking target tests/xnvme_tests_map 00:04:01.080 [188/203] Linking target tests/xnvme_tests_kvs 00:04:01.080 [189/203] Linking target tools/xdd 00:04:01.080 [190/203] Linking target tools/lblk 00:04:01.080 [191/203] Linking target tools/xnvme_file 00:04:01.080 [192/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:01.080 [193/203] Linking target examples/xnvme_enum 00:04:01.080 [194/203] Linking target tools/kvs 00:04:01.080 [195/203] Linking target examples/xnvme_dev 
00:04:01.080 [196/203] Linking target tools/zoned 00:04:01.080 [197/203] Linking target examples/xnvme_hello 00:04:01.080 [198/203] Linking target examples/zoned_io_sync 00:04:01.080 [199/203] Linking target examples/xnvme_single_async 00:04:01.080 [200/203] Linking target examples/xnvme_io_async 00:04:01.080 [201/203] Linking target examples/xnvme_single_sync 00:04:01.080 [202/203] Linking target examples/zoned_io_async 00:04:01.080 [203/203] Linking target tools/xnvme 00:04:01.080 INFO: autodetecting backend as ninja 00:04:01.080 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:01.080 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:33.191 CC lib/ut/ut.o 00:04:33.191 CC lib/ut_mock/mock.o 00:04:33.191 CC lib/log/log.o 00:04:33.191 CC lib/log/log_flags.o 00:04:33.191 CC lib/log/log_deprecated.o 00:04:33.191 LIB libspdk_ut.a 00:04:33.191 LIB libspdk_ut_mock.a 00:04:33.191 LIB libspdk_log.a 00:04:33.191 SO libspdk_ut.so.2.0 00:04:33.191 SO libspdk_ut_mock.so.6.0 00:04:33.191 SO libspdk_log.so.7.0 00:04:33.191 SYMLINK libspdk_ut.so 00:04:33.191 SYMLINK libspdk_ut_mock.so 00:04:33.191 SYMLINK libspdk_log.so 00:04:33.191 CC lib/dma/dma.o 00:04:33.191 CC lib/ioat/ioat.o 00:04:33.191 CXX lib/trace_parser/trace.o 00:04:33.191 CC lib/util/base64.o 00:04:33.191 CC lib/util/bit_array.o 00:04:33.191 CC lib/util/cpuset.o 00:04:33.191 CC lib/util/crc32.o 00:04:33.191 CC lib/util/crc16.o 00:04:33.191 CC lib/util/crc32c.o 00:04:33.191 CC lib/vfio_user/host/vfio_user_pci.o 00:04:33.191 CC lib/util/crc32_ieee.o 00:04:33.191 CC lib/util/crc64.o 00:04:33.191 CC lib/util/dif.o 00:04:33.191 CC lib/util/fd.o 00:04:33.191 CC lib/util/fd_group.o 00:04:33.191 CC lib/util/file.o 00:04:33.191 LIB libspdk_dma.a 00:04:33.191 CC lib/util/hexlify.o 00:04:33.191 CC lib/util/iov.o 00:04:33.191 SO libspdk_dma.so.5.0 00:04:33.191 SYMLINK libspdk_dma.so 00:04:33.191 CC lib/util/math.o 00:04:33.191 CC lib/util/net.o 00:04:33.191 LIB 
libspdk_ioat.a 00:04:33.191 CC lib/util/pipe.o 00:04:33.191 SO libspdk_ioat.so.7.0 00:04:33.191 CC lib/vfio_user/host/vfio_user.o 00:04:33.191 CC lib/util/strerror_tls.o 00:04:33.191 CC lib/util/string.o 00:04:33.191 SYMLINK libspdk_ioat.so 00:04:33.191 CC lib/util/uuid.o 00:04:33.191 CC lib/util/xor.o 00:04:33.191 CC lib/util/zipf.o 00:04:33.191 CC lib/util/md5.o 00:04:33.191 LIB libspdk_vfio_user.a 00:04:33.191 SO libspdk_vfio_user.so.5.0 00:04:33.191 SYMLINK libspdk_vfio_user.so 00:04:33.191 LIB libspdk_util.a 00:04:33.449 SO libspdk_util.so.10.0 00:04:33.449 LIB libspdk_trace_parser.a 00:04:33.449 SYMLINK libspdk_util.so 00:04:33.449 SO libspdk_trace_parser.so.6.0 00:04:33.707 SYMLINK libspdk_trace_parser.so 00:04:33.707 CC lib/rdma_utils/rdma_utils.o 00:04:33.707 CC lib/rdma_provider/common.o 00:04:33.707 CC lib/vmd/vmd.o 00:04:33.707 CC lib/vmd/led.o 00:04:33.708 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:33.708 CC lib/conf/conf.o 00:04:33.708 CC lib/idxd/idxd_user.o 00:04:33.708 CC lib/idxd/idxd.o 00:04:33.708 CC lib/json/json_parse.o 00:04:33.708 CC lib/env_dpdk/env.o 00:04:33.708 LIB libspdk_conf.a 00:04:33.708 CC lib/env_dpdk/memory.o 00:04:33.708 CC lib/env_dpdk/pci.o 00:04:33.708 SO libspdk_conf.so.6.0 00:04:33.708 LIB libspdk_rdma_provider.a 00:04:33.708 CC lib/json/json_util.o 00:04:33.708 SO libspdk_rdma_provider.so.6.0 00:04:33.966 CC lib/idxd/idxd_kernel.o 00:04:33.966 SYMLINK libspdk_conf.so 00:04:33.966 CC lib/env_dpdk/init.o 00:04:33.966 LIB libspdk_rdma_utils.a 00:04:33.966 SYMLINK libspdk_rdma_provider.so 00:04:33.966 CC lib/env_dpdk/threads.o 00:04:33.966 SO libspdk_rdma_utils.so.1.0 00:04:33.966 SYMLINK libspdk_rdma_utils.so 00:04:33.966 CC lib/json/json_write.o 00:04:33.966 CC lib/env_dpdk/pci_ioat.o 00:04:33.966 CC lib/env_dpdk/pci_virtio.o 00:04:33.966 CC lib/env_dpdk/pci_vmd.o 00:04:33.966 CC lib/env_dpdk/pci_idxd.o 00:04:33.966 CC lib/env_dpdk/pci_event.o 00:04:34.224 CC lib/env_dpdk/sigbus_handler.o 00:04:34.224 CC 
lib/env_dpdk/pci_dpdk.o 00:04:34.224 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:34.224 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:34.224 LIB libspdk_json.a 00:04:34.224 SO libspdk_json.so.6.0 00:04:34.224 LIB libspdk_idxd.a 00:04:34.224 SO libspdk_idxd.so.12.1 00:04:34.224 LIB libspdk_vmd.a 00:04:34.224 SYMLINK libspdk_json.so 00:04:34.224 SO libspdk_vmd.so.6.0 00:04:34.224 SYMLINK libspdk_idxd.so 00:04:34.482 SYMLINK libspdk_vmd.so 00:04:34.482 CC lib/jsonrpc/jsonrpc_client.o 00:04:34.482 CC lib/jsonrpc/jsonrpc_server.o 00:04:34.482 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:34.482 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:34.740 LIB libspdk_jsonrpc.a 00:04:34.740 SO libspdk_jsonrpc.so.6.0 00:04:34.740 SYMLINK libspdk_jsonrpc.so 00:04:34.999 LIB libspdk_env_dpdk.a 00:04:34.999 CC lib/rpc/rpc.o 00:04:34.999 SO libspdk_env_dpdk.so.15.0 00:04:35.257 SYMLINK libspdk_env_dpdk.so 00:04:35.257 LIB libspdk_rpc.a 00:04:35.257 SO libspdk_rpc.so.6.0 00:04:35.257 SYMLINK libspdk_rpc.so 00:04:35.514 CC lib/notify/notify.o 00:04:35.515 CC lib/notify/notify_rpc.o 00:04:35.515 CC lib/trace/trace_flags.o 00:04:35.515 CC lib/trace/trace.o 00:04:35.515 CC lib/trace/trace_rpc.o 00:04:35.515 CC lib/keyring/keyring.o 00:04:35.515 CC lib/keyring/keyring_rpc.o 00:04:35.515 LIB libspdk_notify.a 00:04:35.777 SO libspdk_notify.so.6.0 00:04:35.777 SYMLINK libspdk_notify.so 00:04:35.777 LIB libspdk_keyring.a 00:04:35.777 LIB libspdk_trace.a 00:04:35.777 SO libspdk_keyring.so.2.0 00:04:35.777 SO libspdk_trace.so.11.0 00:04:35.777 SYMLINK libspdk_keyring.so 00:04:35.777 SYMLINK libspdk_trace.so 00:04:36.047 CC lib/thread/iobuf.o 00:04:36.047 CC lib/thread/thread.o 00:04:36.047 CC lib/sock/sock.o 00:04:36.047 CC lib/sock/sock_rpc.o 00:04:36.613 LIB libspdk_sock.a 00:04:36.614 SO libspdk_sock.so.10.0 00:04:36.614 SYMLINK libspdk_sock.so 00:04:36.872 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:36.872 CC lib/nvme/nvme_fabric.o 00:04:36.872 CC lib/nvme/nvme_ctrlr.o 00:04:36.872 CC lib/nvme/nvme_ns_cmd.o 00:04:36.872 CC 
lib/nvme/nvme_ns.o 00:04:36.872 CC lib/nvme/nvme_pcie_common.o 00:04:36.872 CC lib/nvme/nvme.o 00:04:36.872 CC lib/nvme/nvme_qpair.o 00:04:36.872 CC lib/nvme/nvme_pcie.o 00:04:37.438 CC lib/nvme/nvme_quirks.o 00:04:37.438 CC lib/nvme/nvme_transport.o 00:04:37.438 CC lib/nvme/nvme_discovery.o 00:04:37.438 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:37.695 LIB libspdk_thread.a 00:04:37.695 SO libspdk_thread.so.10.2 00:04:37.695 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:37.695 SYMLINK libspdk_thread.so 00:04:37.695 CC lib/nvme/nvme_tcp.o 00:04:37.695 CC lib/nvme/nvme_opal.o 00:04:37.954 CC lib/accel/accel.o 00:04:37.954 CC lib/nvme/nvme_io_msg.o 00:04:37.954 CC lib/nvme/nvme_poll_group.o 00:04:37.954 CC lib/nvme/nvme_zns.o 00:04:37.954 CC lib/nvme/nvme_stubs.o 00:04:38.213 CC lib/nvme/nvme_auth.o 00:04:38.213 CC lib/nvme/nvme_cuse.o 00:04:38.213 CC lib/nvme/nvme_rdma.o 00:04:38.471 CC lib/accel/accel_rpc.o 00:04:38.471 CC lib/blob/blobstore.o 00:04:38.471 CC lib/blob/request.o 00:04:38.471 CC lib/blob/zeroes.o 00:04:38.729 CC lib/init/json_config.o 00:04:38.729 CC lib/init/subsystem.o 00:04:38.729 CC lib/virtio/virtio.o 00:04:38.729 CC lib/virtio/virtio_vhost_user.o 00:04:38.987 CC lib/blob/blob_bs_dev.o 00:04:38.987 CC lib/init/subsystem_rpc.o 00:04:38.987 CC lib/accel/accel_sw.o 00:04:38.987 CC lib/init/rpc.o 00:04:38.987 CC lib/virtio/virtio_vfio_user.o 00:04:38.987 CC lib/virtio/virtio_pci.o 00:04:39.245 CC lib/fsdev/fsdev.o 00:04:39.245 CC lib/fsdev/fsdev_io.o 00:04:39.245 LIB libspdk_init.a 00:04:39.245 CC lib/fsdev/fsdev_rpc.o 00:04:39.245 SO libspdk_init.so.6.0 00:04:39.245 LIB libspdk_accel.a 00:04:39.245 SO libspdk_accel.so.16.0 00:04:39.245 SYMLINK libspdk_init.so 00:04:39.245 SYMLINK libspdk_accel.so 00:04:39.503 CC lib/event/app.o 00:04:39.503 CC lib/event/reactor.o 00:04:39.503 CC lib/event/app_rpc.o 00:04:39.503 CC lib/event/log_rpc.o 00:04:39.503 LIB libspdk_virtio.a 00:04:39.503 CC lib/bdev/bdev.o 00:04:39.503 SO libspdk_virtio.so.7.0 00:04:39.503 CC 
lib/bdev/bdev_rpc.o 00:04:39.503 SYMLINK libspdk_virtio.so 00:04:39.503 CC lib/bdev/bdev_zone.o 00:04:39.503 CC lib/event/scheduler_static.o 00:04:39.760 LIB libspdk_nvme.a 00:04:39.760 CC lib/bdev/part.o 00:04:39.760 CC lib/bdev/scsi_nvme.o 00:04:39.760 LIB libspdk_fsdev.a 00:04:39.760 SO libspdk_nvme.so.14.0 00:04:39.760 SO libspdk_fsdev.so.1.0 00:04:39.760 LIB libspdk_event.a 00:04:39.760 SYMLINK libspdk_fsdev.so 00:04:40.018 SO libspdk_event.so.15.0 00:04:40.018 SYMLINK libspdk_event.so 00:04:40.018 SYMLINK libspdk_nvme.so 00:04:40.018 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:40.952 LIB libspdk_fuse_dispatcher.a 00:04:40.952 SO libspdk_fuse_dispatcher.so.1.0 00:04:40.952 SYMLINK libspdk_fuse_dispatcher.so 00:04:41.888 LIB libspdk_blob.a 00:04:41.888 SO libspdk_blob.so.11.0 00:04:42.147 SYMLINK libspdk_blob.so 00:04:42.147 LIB libspdk_bdev.a 00:04:42.147 SO libspdk_bdev.so.17.0 00:04:42.147 CC lib/blobfs/blobfs.o 00:04:42.147 CC lib/blobfs/tree.o 00:04:42.147 CC lib/lvol/lvol.o 00:04:42.147 SYMLINK libspdk_bdev.so 00:04:42.406 CC lib/nbd/nbd.o 00:04:42.406 CC lib/nbd/nbd_rpc.o 00:04:42.406 CC lib/scsi/dev.o 00:04:42.406 CC lib/ublk/ublk.o 00:04:42.406 CC lib/ublk/ublk_rpc.o 00:04:42.406 CC lib/scsi/lun.o 00:04:42.406 CC lib/nvmf/ctrlr.o 00:04:42.406 CC lib/ftl/ftl_core.o 00:04:42.664 CC lib/scsi/port.o 00:04:42.664 CC lib/scsi/scsi.o 00:04:42.664 CC lib/scsi/scsi_bdev.o 00:04:42.664 CC lib/scsi/scsi_pr.o 00:04:42.664 CC lib/scsi/scsi_rpc.o 00:04:42.664 CC lib/nvmf/ctrlr_discovery.o 00:04:42.664 CC lib/ftl/ftl_init.o 00:04:42.922 LIB libspdk_nbd.a 00:04:42.922 CC lib/scsi/task.o 00:04:42.922 SO libspdk_nbd.so.7.0 00:04:42.922 CC lib/ftl/ftl_layout.o 00:04:42.922 SYMLINK libspdk_nbd.so 00:04:42.922 CC lib/ftl/ftl_debug.o 00:04:42.922 CC lib/ftl/ftl_io.o 00:04:42.922 CC lib/ftl/ftl_sb.o 00:04:42.922 LIB libspdk_scsi.a 00:04:43.183 LIB libspdk_blobfs.a 00:04:43.183 LIB libspdk_ublk.a 00:04:43.183 CC lib/ftl/ftl_l2p.o 00:04:43.183 SO libspdk_ublk.so.3.0 
00:04:43.183 SO libspdk_scsi.so.9.0 00:04:43.183 SO libspdk_blobfs.so.10.0 00:04:43.183 CC lib/nvmf/ctrlr_bdev.o 00:04:43.183 CC lib/nvmf/subsystem.o 00:04:43.183 CC lib/nvmf/nvmf.o 00:04:43.183 LIB libspdk_lvol.a 00:04:43.183 CC lib/nvmf/nvmf_rpc.o 00:04:43.183 SYMLINK libspdk_ublk.so 00:04:43.183 CC lib/nvmf/transport.o 00:04:43.183 SYMLINK libspdk_blobfs.so 00:04:43.183 SYMLINK libspdk_scsi.so 00:04:43.183 CC lib/ftl/ftl_l2p_flat.o 00:04:43.183 SO libspdk_lvol.so.10.0 00:04:43.183 SYMLINK libspdk_lvol.so 00:04:43.183 CC lib/nvmf/tcp.o 00:04:43.183 CC lib/iscsi/conn.o 00:04:43.443 CC lib/iscsi/init_grp.o 00:04:43.443 CC lib/ftl/ftl_nv_cache.o 00:04:43.701 CC lib/nvmf/stubs.o 00:04:43.701 CC lib/iscsi/iscsi.o 00:04:43.701 CC lib/ftl/ftl_band.o 00:04:43.959 CC lib/ftl/ftl_band_ops.o 00:04:43.959 CC lib/ftl/ftl_writer.o 00:04:43.959 CC lib/ftl/ftl_rq.o 00:04:43.959 CC lib/ftl/ftl_reloc.o 00:04:44.217 CC lib/nvmf/mdns_server.o 00:04:44.217 CC lib/ftl/ftl_l2p_cache.o 00:04:44.217 CC lib/nvmf/rdma.o 00:04:44.217 CC lib/iscsi/param.o 00:04:44.217 CC lib/nvmf/auth.o 00:04:44.475 CC lib/ftl/ftl_p2l.o 00:04:44.475 CC lib/ftl/ftl_p2l_log.o 00:04:44.475 CC lib/iscsi/portal_grp.o 00:04:44.475 CC lib/vhost/vhost.o 00:04:44.475 CC lib/vhost/vhost_rpc.o 00:04:44.753 CC lib/iscsi/tgt_node.o 00:04:44.753 CC lib/vhost/vhost_scsi.o 00:04:44.753 CC lib/vhost/vhost_blk.o 00:04:44.753 CC lib/ftl/mngt/ftl_mngt.o 00:04:44.753 CC lib/vhost/rte_vhost_user.o 00:04:44.753 CC lib/iscsi/iscsi_subsystem.o 00:04:45.011 CC lib/iscsi/iscsi_rpc.o 00:04:45.011 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:45.011 CC lib/iscsi/task.o 00:04:45.011 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:45.011 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:45.011 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:45.269 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:45.269 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:45.269 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:45.269 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:45.269 LIB libspdk_iscsi.a 00:04:45.269 SO 
libspdk_iscsi.so.8.0 00:04:45.269 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:45.269 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:45.269 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:45.269 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:45.528 CC lib/ftl/utils/ftl_conf.o 00:04:45.528 SYMLINK libspdk_iscsi.so 00:04:45.528 CC lib/ftl/utils/ftl_md.o 00:04:45.528 CC lib/ftl/utils/ftl_mempool.o 00:04:45.528 CC lib/ftl/utils/ftl_bitmap.o 00:04:45.528 CC lib/ftl/utils/ftl_property.o 00:04:45.528 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:45.528 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:45.528 LIB libspdk_vhost.a 00:04:45.528 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:45.528 SO libspdk_vhost.so.8.0 00:04:45.786 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:45.786 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:45.786 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:45.786 SYMLINK libspdk_vhost.so 00:04:45.786 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:45.786 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:45.786 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:45.786 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:45.786 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:45.786 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:45.786 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:45.786 CC lib/ftl/base/ftl_base_dev.o 00:04:45.786 CC lib/ftl/base/ftl_base_bdev.o 00:04:45.786 CC lib/ftl/ftl_trace.o 00:04:46.045 LIB libspdk_ftl.a 00:04:46.045 LIB libspdk_nvmf.a 00:04:46.304 SO libspdk_nvmf.so.19.0 00:04:46.304 SO libspdk_ftl.so.9.0 00:04:46.304 SYMLINK libspdk_nvmf.so 00:04:46.562 SYMLINK libspdk_ftl.so 00:04:46.819 CC module/env_dpdk/env_dpdk_rpc.o 00:04:46.819 CC module/blob/bdev/blob_bdev.o 00:04:46.819 CC module/accel/iaa/accel_iaa.o 00:04:46.819 CC module/sock/posix/posix.o 00:04:46.819 CC module/accel/dsa/accel_dsa.o 00:04:46.819 CC module/accel/ioat/accel_ioat.o 00:04:46.819 CC module/fsdev/aio/fsdev_aio.o 00:04:46.819 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:46.819 CC module/keyring/file/keyring.o 00:04:46.819 CC 
module/accel/error/accel_error.o 00:04:46.819 LIB libspdk_env_dpdk_rpc.a 00:04:46.819 SO libspdk_env_dpdk_rpc.so.6.0 00:04:46.819 SYMLINK libspdk_env_dpdk_rpc.so 00:04:46.819 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:46.819 CC module/keyring/file/keyring_rpc.o 00:04:46.819 CC module/accel/error/accel_error_rpc.o 00:04:47.077 CC module/accel/ioat/accel_ioat_rpc.o 00:04:47.077 CC module/accel/iaa/accel_iaa_rpc.o 00:04:47.077 LIB libspdk_keyring_file.a 00:04:47.077 LIB libspdk_scheduler_dynamic.a 00:04:47.077 CC module/accel/dsa/accel_dsa_rpc.o 00:04:47.077 SO libspdk_scheduler_dynamic.so.4.0 00:04:47.077 SO libspdk_keyring_file.so.2.0 00:04:47.077 LIB libspdk_blob_bdev.a 00:04:47.077 LIB libspdk_accel_error.a 00:04:47.077 SO libspdk_blob_bdev.so.11.0 00:04:47.077 SO libspdk_accel_error.so.2.0 00:04:47.077 SYMLINK libspdk_keyring_file.so 00:04:47.077 SYMLINK libspdk_scheduler_dynamic.so 00:04:47.077 CC module/fsdev/aio/linux_aio_mgr.o 00:04:47.077 LIB libspdk_accel_iaa.a 00:04:47.077 LIB libspdk_accel_ioat.a 00:04:47.077 LIB libspdk_accel_dsa.a 00:04:47.077 SYMLINK libspdk_blob_bdev.so 00:04:47.077 SYMLINK libspdk_accel_error.so 00:04:47.077 SO libspdk_accel_iaa.so.3.0 00:04:47.077 SO libspdk_accel_dsa.so.5.0 00:04:47.077 SO libspdk_accel_ioat.so.6.0 00:04:47.077 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:47.077 SYMLINK libspdk_accel_iaa.so 00:04:47.077 SYMLINK libspdk_accel_ioat.so 00:04:47.077 SYMLINK libspdk_accel_dsa.so 00:04:47.077 CC module/keyring/linux/keyring.o 00:04:47.335 LIB libspdk_scheduler_dpdk_governor.a 00:04:47.335 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:47.335 CC module/keyring/linux/keyring_rpc.o 00:04:47.335 CC module/scheduler/gscheduler/gscheduler.o 00:04:47.335 CC module/bdev/gpt/gpt.o 00:04:47.335 CC module/bdev/delay/vbdev_delay.o 00:04:47.335 CC module/bdev/error/vbdev_error.o 00:04:47.335 CC module/blobfs/bdev/blobfs_bdev.o 00:04:47.335 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:47.335 CC 
module/bdev/error/vbdev_error_rpc.o 00:04:47.335 LIB libspdk_keyring_linux.a 00:04:47.335 LIB libspdk_scheduler_gscheduler.a 00:04:47.335 SO libspdk_keyring_linux.so.1.0 00:04:47.335 SO libspdk_scheduler_gscheduler.so.4.0 00:04:47.335 CC module/bdev/gpt/vbdev_gpt.o 00:04:47.593 LIB libspdk_fsdev_aio.a 00:04:47.593 CC module/bdev/lvol/vbdev_lvol.o 00:04:47.593 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:47.593 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:47.593 SO libspdk_fsdev_aio.so.1.0 00:04:47.593 SYMLINK libspdk_keyring_linux.so 00:04:47.593 SYMLINK libspdk_scheduler_gscheduler.so 00:04:47.593 LIB libspdk_bdev_error.a 00:04:47.593 LIB libspdk_sock_posix.a 00:04:47.593 SYMLINK libspdk_fsdev_aio.so 00:04:47.593 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:47.593 SO libspdk_bdev_error.so.6.0 00:04:47.593 SO libspdk_sock_posix.so.6.0 00:04:47.593 LIB libspdk_blobfs_bdev.a 00:04:47.593 SYMLINK libspdk_bdev_error.so 00:04:47.593 SO libspdk_blobfs_bdev.so.6.0 00:04:47.593 CC module/bdev/malloc/bdev_malloc.o 00:04:47.593 SYMLINK libspdk_sock_posix.so 00:04:47.593 CC module/bdev/null/bdev_null.o 00:04:47.593 CC module/bdev/null/bdev_null_rpc.o 00:04:47.593 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:47.593 SYMLINK libspdk_blobfs_bdev.so 00:04:47.593 LIB libspdk_bdev_gpt.a 00:04:47.593 LIB libspdk_bdev_delay.a 00:04:47.851 CC module/bdev/nvme/bdev_nvme.o 00:04:47.851 SO libspdk_bdev_gpt.so.6.0 00:04:47.851 SO libspdk_bdev_delay.so.6.0 00:04:47.851 SYMLINK libspdk_bdev_gpt.so 00:04:47.851 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:47.851 CC module/bdev/passthru/vbdev_passthru.o 00:04:47.851 SYMLINK libspdk_bdev_delay.so 00:04:47.851 CC module/bdev/nvme/nvme_rpc.o 00:04:47.851 LIB libspdk_bdev_null.a 00:04:47.851 SO libspdk_bdev_null.so.6.0 00:04:47.851 CC module/bdev/split/vbdev_split.o 00:04:47.851 CC module/bdev/raid/bdev_raid.o 00:04:47.851 SYMLINK libspdk_bdev_null.so 00:04:47.851 LIB libspdk_bdev_lvol.a 00:04:48.108 CC module/bdev/zone_block/vbdev_zone_block.o 
00:04:48.108 SO libspdk_bdev_lvol.so.6.0 00:04:48.108 LIB libspdk_bdev_malloc.a 00:04:48.108 SO libspdk_bdev_malloc.so.6.0 00:04:48.108 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:48.108 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:48.108 SYMLINK libspdk_bdev_lvol.so 00:04:48.108 CC module/bdev/nvme/bdev_mdns_client.o 00:04:48.108 SYMLINK libspdk_bdev_malloc.so 00:04:48.108 CC module/bdev/xnvme/bdev_xnvme.o 00:04:48.108 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:48.108 CC module/bdev/split/vbdev_split_rpc.o 00:04:48.108 LIB libspdk_bdev_passthru.a 00:04:48.108 SO libspdk_bdev_passthru.so.6.0 00:04:48.108 SYMLINK libspdk_bdev_passthru.so 00:04:48.108 CC module/bdev/raid/bdev_raid_rpc.o 00:04:48.108 CC module/bdev/raid/bdev_raid_sb.o 00:04:48.365 LIB libspdk_bdev_split.a 00:04:48.365 LIB libspdk_bdev_zone_block.a 00:04:48.365 SO libspdk_bdev_split.so.6.0 00:04:48.365 SO libspdk_bdev_zone_block.so.6.0 00:04:48.365 LIB libspdk_bdev_xnvme.a 00:04:48.365 SYMLINK libspdk_bdev_split.so 00:04:48.365 SO libspdk_bdev_xnvme.so.3.0 00:04:48.365 SYMLINK libspdk_bdev_zone_block.so 00:04:48.365 CC module/bdev/ftl/bdev_ftl.o 00:04:48.365 CC module/bdev/aio/bdev_aio.o 00:04:48.365 CC module/bdev/raid/raid0.o 00:04:48.365 CC module/bdev/nvme/vbdev_opal.o 00:04:48.365 SYMLINK libspdk_bdev_xnvme.so 00:04:48.365 CC module/bdev/aio/bdev_aio_rpc.o 00:04:48.365 CC module/bdev/raid/raid1.o 00:04:48.622 CC module/bdev/iscsi/bdev_iscsi.o 00:04:48.622 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:48.622 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:48.622 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:48.622 CC module/bdev/raid/concat.o 00:04:48.622 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:48.622 LIB libspdk_bdev_aio.a 00:04:48.622 SO libspdk_bdev_aio.so.6.0 00:04:49.499 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:49.499 LIB libspdk_bdev_ftl.a 00:04:49.499 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:49.499 SYMLINK libspdk_bdev_aio.so 00:04:49.499 SO libspdk_bdev_ftl.so.6.0 
00:04:49.500 LIB libspdk_bdev_raid.a 00:04:49.500 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:49.500 SYMLINK libspdk_bdev_ftl.so 00:04:49.500 SO libspdk_bdev_raid.so.6.0 00:04:49.500 LIB libspdk_bdev_iscsi.a 00:04:49.500 SYMLINK libspdk_bdev_raid.so 00:04:49.500 SO libspdk_bdev_iscsi.so.6.0 00:04:49.500 LIB libspdk_bdev_virtio.a 00:04:49.500 SYMLINK libspdk_bdev_iscsi.so 00:04:49.500 SO libspdk_bdev_virtio.so.6.0 00:04:49.500 SYMLINK libspdk_bdev_virtio.so 00:04:49.761 LIB libspdk_bdev_nvme.a 00:04:49.761 SO libspdk_bdev_nvme.so.7.0 00:04:49.761 SYMLINK libspdk_bdev_nvme.so 00:04:50.329 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:50.329 CC module/event/subsystems/keyring/keyring.o 00:04:50.329 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:50.329 CC module/event/subsystems/vmd/vmd.o 00:04:50.329 CC module/event/subsystems/scheduler/scheduler.o 00:04:50.329 CC module/event/subsystems/iobuf/iobuf.o 00:04:50.329 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:50.329 CC module/event/subsystems/fsdev/fsdev.o 00:04:50.329 CC module/event/subsystems/sock/sock.o 00:04:50.329 LIB libspdk_event_fsdev.a 00:04:50.329 LIB libspdk_event_scheduler.a 00:04:50.329 LIB libspdk_event_keyring.a 00:04:50.329 LIB libspdk_event_vhost_blk.a 00:04:50.329 LIB libspdk_event_sock.a 00:04:50.329 SO libspdk_event_fsdev.so.1.0 00:04:50.329 SO libspdk_event_scheduler.so.4.0 00:04:50.329 LIB libspdk_event_vmd.a 00:04:50.329 SO libspdk_event_keyring.so.1.0 00:04:50.329 SO libspdk_event_vhost_blk.so.3.0 00:04:50.329 LIB libspdk_event_iobuf.a 00:04:50.329 SO libspdk_event_sock.so.5.0 00:04:50.329 SO libspdk_event_vmd.so.6.0 00:04:50.329 SO libspdk_event_iobuf.so.3.0 00:04:50.329 SYMLINK libspdk_event_fsdev.so 00:04:50.329 SYMLINK libspdk_event_keyring.so 00:04:50.329 SYMLINK libspdk_event_scheduler.so 00:04:50.329 SYMLINK libspdk_event_sock.so 00:04:50.329 SYMLINK libspdk_event_vhost_blk.so 00:04:50.329 SYMLINK libspdk_event_vmd.so 00:04:50.329 SYMLINK libspdk_event_iobuf.so 
00:04:50.589 CC module/event/subsystems/accel/accel.o 00:04:50.857 LIB libspdk_event_accel.a 00:04:50.857 SO libspdk_event_accel.so.6.0 00:04:50.857 SYMLINK libspdk_event_accel.so 00:04:51.116 CC module/event/subsystems/bdev/bdev.o 00:04:51.116 LIB libspdk_event_bdev.a 00:04:51.116 SO libspdk_event_bdev.so.6.0 00:04:51.374 SYMLINK libspdk_event_bdev.so 00:04:51.374 CC module/event/subsystems/ublk/ublk.o 00:04:51.374 CC module/event/subsystems/scsi/scsi.o 00:04:51.374 CC module/event/subsystems/nbd/nbd.o 00:04:51.374 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:51.374 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:51.634 LIB libspdk_event_scsi.a 00:04:51.634 LIB libspdk_event_nbd.a 00:04:51.634 LIB libspdk_event_ublk.a 00:04:51.634 SO libspdk_event_nbd.so.6.0 00:04:51.634 SO libspdk_event_scsi.so.6.0 00:04:51.634 SO libspdk_event_ublk.so.3.0 00:04:51.634 SYMLINK libspdk_event_scsi.so 00:04:51.634 SYMLINK libspdk_event_ublk.so 00:04:51.634 SYMLINK libspdk_event_nbd.so 00:04:51.634 LIB libspdk_event_nvmf.a 00:04:51.634 SO libspdk_event_nvmf.so.6.0 00:04:51.634 SYMLINK libspdk_event_nvmf.so 00:04:51.893 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:51.893 CC module/event/subsystems/iscsi/iscsi.o 00:04:51.893 LIB libspdk_event_vhost_scsi.a 00:04:51.893 LIB libspdk_event_iscsi.a 00:04:51.893 SO libspdk_event_vhost_scsi.so.3.0 00:04:51.893 SO libspdk_event_iscsi.so.6.0 00:04:51.893 SYMLINK libspdk_event_vhost_scsi.so 00:04:52.153 SYMLINK libspdk_event_iscsi.so 00:04:52.153 SO libspdk.so.6.0 00:04:52.153 SYMLINK libspdk.so 00:04:52.413 CC app/trace_record/trace_record.o 00:04:52.413 CXX app/trace/trace.o 00:04:52.413 CC app/spdk_nvme_perf/perf.o 00:04:52.413 CC app/spdk_lspci/spdk_lspci.o 00:04:52.413 CC app/nvmf_tgt/nvmf_main.o 00:04:52.413 CC app/iscsi_tgt/iscsi_tgt.o 00:04:52.413 CC app/spdk_tgt/spdk_tgt.o 00:04:52.413 CC examples/ioat/perf/perf.o 00:04:52.413 CC test/thread/poller_perf/poller_perf.o 00:04:52.413 CC examples/util/zipf/zipf.o 
00:04:52.673 LINK spdk_lspci 00:04:52.673 LINK nvmf_tgt 00:04:52.673 LINK iscsi_tgt 00:04:52.673 LINK zipf 00:04:52.673 LINK poller_perf 00:04:52.673 LINK spdk_trace_record 00:04:52.673 LINK spdk_tgt 00:04:52.673 LINK ioat_perf 00:04:52.673 LINK spdk_trace 00:04:52.673 CC app/spdk_nvme_identify/identify.o 00:04:52.933 CC examples/ioat/verify/verify.o 00:04:52.933 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:52.933 CC app/spdk_nvme_discover/discovery_aer.o 00:04:52.933 CC app/spdk_top/spdk_top.o 00:04:52.933 CC app/spdk_dd/spdk_dd.o 00:04:52.933 CC test/dma/test_dma/test_dma.o 00:04:53.192 CC test/app/bdev_svc/bdev_svc.o 00:04:53.192 LINK interrupt_tgt 00:04:53.192 LINK verify 00:04:53.192 LINK spdk_nvme_discover 00:04:53.192 CC examples/thread/thread/thread_ex.o 00:04:53.192 LINK bdev_svc 00:04:53.452 LINK spdk_nvme_perf 00:04:53.452 TEST_HEADER include/spdk/accel.h 00:04:53.452 TEST_HEADER include/spdk/accel_module.h 00:04:53.452 TEST_HEADER include/spdk/assert.h 00:04:53.452 TEST_HEADER include/spdk/barrier.h 00:04:53.452 TEST_HEADER include/spdk/base64.h 00:04:53.452 TEST_HEADER include/spdk/bdev.h 00:04:53.452 TEST_HEADER include/spdk/bdev_module.h 00:04:53.452 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.452 TEST_HEADER include/spdk/bit_array.h 00:04:53.452 TEST_HEADER include/spdk/bit_pool.h 00:04:53.452 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.452 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.452 TEST_HEADER include/spdk/blobfs.h 00:04:53.452 TEST_HEADER include/spdk/blob.h 00:04:53.452 TEST_HEADER include/spdk/conf.h 00:04:53.452 TEST_HEADER include/spdk/config.h 00:04:53.452 TEST_HEADER include/spdk/cpuset.h 00:04:53.452 TEST_HEADER include/spdk/crc16.h 00:04:53.452 TEST_HEADER include/spdk/crc32.h 00:04:53.452 TEST_HEADER include/spdk/crc64.h 00:04:53.452 TEST_HEADER include/spdk/dif.h 00:04:53.452 TEST_HEADER include/spdk/dma.h 00:04:53.452 TEST_HEADER include/spdk/endian.h 00:04:53.452 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.452 LINK 
spdk_dd 00:04:53.452 TEST_HEADER include/spdk/env.h 00:04:53.452 TEST_HEADER include/spdk/event.h 00:04:53.452 TEST_HEADER include/spdk/fd_group.h 00:04:53.452 TEST_HEADER include/spdk/fd.h 00:04:53.452 TEST_HEADER include/spdk/file.h 00:04:53.452 TEST_HEADER include/spdk/fsdev.h 00:04:53.452 LINK thread 00:04:53.452 TEST_HEADER include/spdk/fsdev_module.h 00:04:53.452 TEST_HEADER include/spdk/ftl.h 00:04:53.452 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:53.452 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.452 TEST_HEADER include/spdk/hexlify.h 00:04:53.452 TEST_HEADER include/spdk/histogram_data.h 00:04:53.452 TEST_HEADER include/spdk/idxd.h 00:04:53.452 TEST_HEADER include/spdk/idxd_spec.h 00:04:53.452 TEST_HEADER include/spdk/init.h 00:04:53.452 TEST_HEADER include/spdk/ioat.h 00:04:53.452 TEST_HEADER include/spdk/ioat_spec.h 00:04:53.452 TEST_HEADER include/spdk/iscsi_spec.h 00:04:53.452 TEST_HEADER include/spdk/json.h 00:04:53.452 TEST_HEADER include/spdk/jsonrpc.h 00:04:53.452 TEST_HEADER include/spdk/keyring.h 00:04:53.452 TEST_HEADER include/spdk/keyring_module.h 00:04:53.452 TEST_HEADER include/spdk/likely.h 00:04:53.452 CC test/event/event_perf/event_perf.o 00:04:53.452 TEST_HEADER include/spdk/log.h 00:04:53.452 TEST_HEADER include/spdk/lvol.h 00:04:53.452 TEST_HEADER include/spdk/md5.h 00:04:53.452 TEST_HEADER include/spdk/memory.h 00:04:53.452 TEST_HEADER include/spdk/mmio.h 00:04:53.452 TEST_HEADER include/spdk/nbd.h 00:04:53.452 TEST_HEADER include/spdk/net.h 00:04:53.452 TEST_HEADER include/spdk/notify.h 00:04:53.452 TEST_HEADER include/spdk/nvme.h 00:04:53.452 TEST_HEADER include/spdk/nvme_intel.h 00:04:53.452 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:53.452 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:53.452 TEST_HEADER include/spdk/nvme_spec.h 00:04:53.452 TEST_HEADER include/spdk/nvme_zns.h 00:04:53.452 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:53.452 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:53.452 TEST_HEADER include/spdk/nvmf.h 
00:04:53.452 TEST_HEADER include/spdk/nvmf_spec.h 00:04:53.452 TEST_HEADER include/spdk/nvmf_transport.h 00:04:53.452 TEST_HEADER include/spdk/opal.h 00:04:53.452 TEST_HEADER include/spdk/opal_spec.h 00:04:53.452 TEST_HEADER include/spdk/pci_ids.h 00:04:53.452 TEST_HEADER include/spdk/pipe.h 00:04:53.452 TEST_HEADER include/spdk/queue.h 00:04:53.452 TEST_HEADER include/spdk/reduce.h 00:04:53.452 CC test/env/mem_callbacks/mem_callbacks.o 00:04:53.452 TEST_HEADER include/spdk/rpc.h 00:04:53.452 TEST_HEADER include/spdk/scheduler.h 00:04:53.452 TEST_HEADER include/spdk/scsi.h 00:04:53.452 TEST_HEADER include/spdk/scsi_spec.h 00:04:53.452 TEST_HEADER include/spdk/sock.h 00:04:53.452 TEST_HEADER include/spdk/stdinc.h 00:04:53.452 TEST_HEADER include/spdk/string.h 00:04:53.452 TEST_HEADER include/spdk/thread.h 00:04:53.452 TEST_HEADER include/spdk/trace.h 00:04:53.452 TEST_HEADER include/spdk/trace_parser.h 00:04:53.452 TEST_HEADER include/spdk/tree.h 00:04:53.452 TEST_HEADER include/spdk/ublk.h 00:04:53.452 TEST_HEADER include/spdk/util.h 00:04:53.452 TEST_HEADER include/spdk/uuid.h 00:04:53.452 TEST_HEADER include/spdk/version.h 00:04:53.713 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:53.713 CC test/app/histogram_perf/histogram_perf.o 00:04:53.713 LINK test_dma 00:04:53.713 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:53.713 TEST_HEADER include/spdk/vhost.h 00:04:53.713 TEST_HEADER include/spdk/vmd.h 00:04:53.713 TEST_HEADER include/spdk/xor.h 00:04:53.713 TEST_HEADER include/spdk/zipf.h 00:04:53.713 CXX test/cpp_headers/accel.o 00:04:53.713 LINK event_perf 00:04:53.713 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:53.713 CC test/app/jsoncat/jsoncat.o 00:04:53.713 CXX test/cpp_headers/accel_module.o 00:04:53.713 LINK histogram_perf 00:04:53.713 LINK spdk_nvme_identify 00:04:53.713 CC examples/sock/hello_world/hello_sock.o 00:04:53.713 CC test/event/reactor/reactor.o 00:04:53.974 LINK jsoncat 00:04:53.974 CXX test/cpp_headers/assert.o 00:04:53.974 CC 
test/app/stub/stub.o 00:04:53.974 LINK reactor 00:04:53.974 CC app/fio/nvme/fio_plugin.o 00:04:53.974 LINK spdk_top 00:04:53.974 CC app/vhost/vhost.o 00:04:53.974 CC test/rpc_client/rpc_client_test.o 00:04:53.974 LINK hello_sock 00:04:53.974 LINK mem_callbacks 00:04:53.974 CXX test/cpp_headers/barrier.o 00:04:53.974 LINK nvme_fuzz 00:04:53.974 LINK stub 00:04:54.235 CC test/event/reactor_perf/reactor_perf.o 00:04:54.235 LINK vhost 00:04:54.235 CXX test/cpp_headers/base64.o 00:04:54.235 CC test/env/vtophys/vtophys.o 00:04:54.235 LINK rpc_client_test 00:04:54.235 LINK reactor_perf 00:04:54.235 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:54.235 CC test/accel/dif/dif.o 00:04:54.235 CC examples/vmd/lsvmd/lsvmd.o 00:04:54.235 LINK vtophys 00:04:54.235 LINK spdk_nvme 00:04:54.235 CXX test/cpp_headers/bdev.o 00:04:54.495 CC test/blobfs/mkfs/mkfs.o 00:04:54.495 LINK lsvmd 00:04:54.495 CC test/event/app_repeat/app_repeat.o 00:04:54.495 CC app/fio/bdev/fio_plugin.o 00:04:54.495 CXX test/cpp_headers/bdev_module.o 00:04:54.495 CC examples/idxd/perf/perf.o 00:04:54.495 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:54.495 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:54.495 LINK app_repeat 00:04:54.495 LINK mkfs 00:04:54.755 CXX test/cpp_headers/bdev_zone.o 00:04:54.755 CC examples/vmd/led/led.o 00:04:54.755 LINK env_dpdk_post_init 00:04:54.755 CXX test/cpp_headers/bit_array.o 00:04:54.755 LINK led 00:04:54.755 CC test/event/scheduler/scheduler.o 00:04:54.755 LINK idxd_perf 00:04:54.755 LINK hello_fsdev 00:04:54.755 CXX test/cpp_headers/bit_pool.o 00:04:54.755 CC test/env/memory/memory_ut.o 00:04:55.016 CXX test/cpp_headers/blob_bdev.o 00:04:55.016 CXX test/cpp_headers/blobfs_bdev.o 00:04:55.016 LINK dif 00:04:55.016 CC test/lvol/esnap/esnap.o 00:04:55.016 LINK spdk_bdev 00:04:55.016 CXX test/cpp_headers/blobfs.o 00:04:55.016 LINK scheduler 00:04:55.016 CXX test/cpp_headers/blob.o 00:04:55.016 CC examples/accel/perf/accel_perf.o 00:04:55.016 CC 
test/env/pci/pci_ut.o 00:04:55.278 CXX test/cpp_headers/conf.o 00:04:55.278 CC examples/nvme/hello_world/hello_world.o 00:04:55.278 CC examples/blob/hello_world/hello_blob.o 00:04:55.278 CC examples/blob/cli/blobcli.o 00:04:55.278 CC test/nvme/aer/aer.o 00:04:55.278 CXX test/cpp_headers/config.o 00:04:55.278 CXX test/cpp_headers/cpuset.o 00:04:55.539 LINK hello_world 00:04:55.539 LINK hello_blob 00:04:55.539 LINK accel_perf 00:04:55.539 CXX test/cpp_headers/crc16.o 00:04:55.539 LINK pci_ut 00:04:55.539 LINK aer 00:04:55.539 CXX test/cpp_headers/crc32.o 00:04:55.539 CC examples/nvme/reconnect/reconnect.o 00:04:55.798 CC test/nvme/reset/reset.o 00:04:55.799 CXX test/cpp_headers/crc64.o 00:04:55.799 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:55.799 LINK blobcli 00:04:55.799 CC test/bdev/bdevio/bdevio.o 00:04:55.799 CC examples/nvme/arbitration/arbitration.o 00:04:55.799 LINK memory_ut 00:04:55.799 CXX test/cpp_headers/dif.o 00:04:56.058 CXX test/cpp_headers/dma.o 00:04:56.058 LINK reset 00:04:56.058 LINK iscsi_fuzz 00:04:56.058 LINK reconnect 00:04:56.058 CC examples/nvme/hotplug/hotplug.o 00:04:56.058 CXX test/cpp_headers/endian.o 00:04:56.058 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:56.058 LINK arbitration 00:04:56.058 CC test/nvme/sgl/sgl.o 00:04:56.316 LINK bdevio 00:04:56.316 CXX test/cpp_headers/env_dpdk.o 00:04:56.316 LINK cmb_copy 00:04:56.316 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:56.316 CC examples/nvme/abort/abort.o 00:04:56.316 CXX test/cpp_headers/env.o 00:04:56.316 LINK nvme_manage 00:04:56.316 LINK hotplug 00:04:56.316 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:56.316 CXX test/cpp_headers/event.o 00:04:56.316 CXX test/cpp_headers/fd_group.o 00:04:56.316 CXX test/cpp_headers/fd.o 00:04:56.316 LINK sgl 00:04:56.574 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:56.574 CC test/nvme/e2edp/nvme_dp.o 00:04:56.574 CC test/nvme/overhead/overhead.o 00:04:56.574 CXX test/cpp_headers/file.o 00:04:56.574 CXX 
test/cpp_headers/fsdev.o 00:04:56.574 LINK abort 00:04:56.574 LINK pmr_persistence 00:04:56.574 CC test/nvme/err_injection/err_injection.o 00:04:56.574 CXX test/cpp_headers/fsdev_module.o 00:04:56.574 LINK nvme_dp 00:04:56.574 CC examples/bdev/hello_world/hello_bdev.o 00:04:56.833 CC examples/bdev/bdevperf/bdevperf.o 00:04:56.833 LINK overhead 00:04:56.833 LINK vhost_fuzz 00:04:56.833 LINK err_injection 00:04:56.833 CC test/nvme/startup/startup.o 00:04:56.833 CC test/nvme/reserve/reserve.o 00:04:56.833 CXX test/cpp_headers/ftl.o 00:04:56.833 CC test/nvme/simple_copy/simple_copy.o 00:04:56.833 LINK hello_bdev 00:04:56.833 LINK startup 00:04:56.833 CC test/nvme/boot_partition/boot_partition.o 00:04:56.833 CC test/nvme/compliance/nvme_compliance.o 00:04:56.833 CC test/nvme/connect_stress/connect_stress.o 00:04:57.092 LINK reserve 00:04:57.092 CXX test/cpp_headers/fuse_dispatcher.o 00:04:57.092 LINK simple_copy 00:04:57.092 CXX test/cpp_headers/gpt_spec.o 00:04:57.092 LINK boot_partition 00:04:57.092 LINK connect_stress 00:04:57.092 CC test/nvme/fused_ordering/fused_ordering.o 00:04:57.092 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:57.092 CXX test/cpp_headers/hexlify.o 00:04:57.092 CC test/nvme/cuse/cuse.o 00:04:57.092 CC test/nvme/fdp/fdp.o 00:04:57.353 CXX test/cpp_headers/histogram_data.o 00:04:57.353 LINK nvme_compliance 00:04:57.353 CXX test/cpp_headers/idxd.o 00:04:57.353 CXX test/cpp_headers/idxd_spec.o 00:04:57.353 LINK doorbell_aers 00:04:57.353 LINK fused_ordering 00:04:57.353 CXX test/cpp_headers/init.o 00:04:57.353 CXX test/cpp_headers/ioat.o 00:04:57.353 CXX test/cpp_headers/ioat_spec.o 00:04:57.353 CXX test/cpp_headers/iscsi_spec.o 00:04:57.353 CXX test/cpp_headers/json.o 00:04:57.353 CXX test/cpp_headers/jsonrpc.o 00:04:57.612 CXX test/cpp_headers/keyring.o 00:04:57.612 LINK bdevperf 00:04:57.612 CXX test/cpp_headers/keyring_module.o 00:04:57.612 CXX test/cpp_headers/likely.o 00:04:57.612 LINK fdp 00:04:57.612 CXX test/cpp_headers/log.o 
00:04:57.612 CXX test/cpp_headers/lvol.o 00:04:57.612 CXX test/cpp_headers/md5.o 00:04:57.612 CXX test/cpp_headers/memory.o 00:04:57.612 CXX test/cpp_headers/mmio.o 00:04:57.612 CXX test/cpp_headers/nbd.o 00:04:57.612 CXX test/cpp_headers/net.o 00:04:57.612 CXX test/cpp_headers/notify.o 00:04:57.612 CXX test/cpp_headers/nvme.o 00:04:57.612 CXX test/cpp_headers/nvme_intel.o 00:04:57.612 CXX test/cpp_headers/nvme_ocssd.o 00:04:57.869 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:57.869 CXX test/cpp_headers/nvme_spec.o 00:04:57.869 CXX test/cpp_headers/nvme_zns.o 00:04:57.869 CXX test/cpp_headers/nvmf_cmd.o 00:04:57.869 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:57.869 CXX test/cpp_headers/nvmf.o 00:04:57.869 CXX test/cpp_headers/nvmf_spec.o 00:04:57.869 CXX test/cpp_headers/nvmf_transport.o 00:04:57.869 CC examples/nvmf/nvmf/nvmf.o 00:04:57.869 CXX test/cpp_headers/opal.o 00:04:57.869 CXX test/cpp_headers/opal_spec.o 00:04:57.870 CXX test/cpp_headers/pci_ids.o 00:04:57.870 CXX test/cpp_headers/pipe.o 00:04:57.870 CXX test/cpp_headers/queue.o 00:04:58.128 CXX test/cpp_headers/reduce.o 00:04:58.128 CXX test/cpp_headers/rpc.o 00:04:58.128 CXX test/cpp_headers/scheduler.o 00:04:58.128 CXX test/cpp_headers/scsi.o 00:04:58.128 CXX test/cpp_headers/scsi_spec.o 00:04:58.128 CXX test/cpp_headers/sock.o 00:04:58.128 CXX test/cpp_headers/stdinc.o 00:04:58.128 CXX test/cpp_headers/string.o 00:04:58.128 CXX test/cpp_headers/thread.o 00:04:58.128 CXX test/cpp_headers/trace.o 00:04:58.128 LINK nvmf 00:04:58.128 CXX test/cpp_headers/trace_parser.o 00:04:58.128 CXX test/cpp_headers/tree.o 00:04:58.128 CXX test/cpp_headers/ublk.o 00:04:58.128 CXX test/cpp_headers/util.o 00:04:58.128 CXX test/cpp_headers/uuid.o 00:04:58.128 CXX test/cpp_headers/version.o 00:04:58.128 CXX test/cpp_headers/vfio_user_pci.o 00:04:58.128 CXX test/cpp_headers/vfio_user_spec.o 00:04:58.389 CXX test/cpp_headers/vhost.o 00:04:58.389 CXX test/cpp_headers/vmd.o 00:04:58.389 CXX test/cpp_headers/xor.o 00:04:58.389 
CXX test/cpp_headers/zipf.o 00:04:58.389 LINK cuse 00:05:00.303 LINK esnap 00:05:00.564 00:05:00.564 real 1m4.170s 00:05:00.564 user 5m18.056s 00:05:00.564 sys 0m56.607s 00:05:00.564 18:15:49 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:00.564 ************************************ 00:05:00.564 END TEST make 00:05:00.564 ************************************ 00:05:00.564 18:15:49 make -- common/autotest_common.sh@10 -- $ set +x 00:05:00.564 18:15:49 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:00.564 18:15:49 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:00.564 18:15:49 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:00.564 18:15:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.564 18:15:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:00.564 18:15:49 -- pm/common@44 -- $ pid=5804 00:05:00.564 18:15:49 -- pm/common@50 -- $ kill -TERM 5804 00:05:00.564 18:15:49 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.564 18:15:49 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:00.564 18:15:49 -- pm/common@44 -- $ pid=5805 00:05:00.564 18:15:49 -- pm/common@50 -- $ kill -TERM 5805 00:05:00.564 18:15:49 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:00.564 18:15:49 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:00.564 18:15:49 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:00.826 18:15:49 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:00.826 18:15:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.826 18:15:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.826 18:15:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.826 18:15:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.826 18:15:49 -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.826 18:15:49 -- scripts/common.sh@337 -- # IFS=.-: 
00:05:00.826 18:15:49 -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.826 18:15:49 -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.826 18:15:49 -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.826 18:15:49 -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.826 18:15:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.826 18:15:49 -- scripts/common.sh@344 -- # case "$op" in 00:05:00.826 18:15:49 -- scripts/common.sh@345 -- # : 1 00:05:00.826 18:15:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.826 18:15:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.826 18:15:49 -- scripts/common.sh@365 -- # decimal 1 00:05:00.826 18:15:49 -- scripts/common.sh@353 -- # local d=1 00:05:00.826 18:15:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.826 18:15:49 -- scripts/common.sh@355 -- # echo 1 00:05:00.826 18:15:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.826 18:15:49 -- scripts/common.sh@366 -- # decimal 2 00:05:00.826 18:15:49 -- scripts/common.sh@353 -- # local d=2 00:05:00.826 18:15:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.826 18:15:49 -- scripts/common.sh@355 -- # echo 2 00:05:00.826 18:15:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.826 18:15:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.826 18:15:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.827 18:15:49 -- scripts/common.sh@368 -- # return 0 00:05:00.827 18:15:49 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.827 18:15:49 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:00.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.827 --rc genhtml_branch_coverage=1 00:05:00.827 --rc genhtml_function_coverage=1 00:05:00.827 --rc genhtml_legend=1 00:05:00.827 --rc geninfo_all_blocks=1 00:05:00.827 --rc geninfo_unexecuted_blocks=1 00:05:00.827 00:05:00.827 ' 00:05:00.827 
18:15:49 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:00.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.827 --rc genhtml_branch_coverage=1 00:05:00.827 --rc genhtml_function_coverage=1 00:05:00.827 --rc genhtml_legend=1 00:05:00.827 --rc geninfo_all_blocks=1 00:05:00.827 --rc geninfo_unexecuted_blocks=1 00:05:00.827 00:05:00.827 ' 00:05:00.827 18:15:49 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:00.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.827 --rc genhtml_branch_coverage=1 00:05:00.827 --rc genhtml_function_coverage=1 00:05:00.827 --rc genhtml_legend=1 00:05:00.827 --rc geninfo_all_blocks=1 00:05:00.827 --rc geninfo_unexecuted_blocks=1 00:05:00.827 00:05:00.827 ' 00:05:00.827 18:15:49 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:00.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.827 --rc genhtml_branch_coverage=1 00:05:00.827 --rc genhtml_function_coverage=1 00:05:00.827 --rc genhtml_legend=1 00:05:00.827 --rc geninfo_all_blocks=1 00:05:00.827 --rc geninfo_unexecuted_blocks=1 00:05:00.827 00:05:00.827 ' 00:05:00.827 18:15:49 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:00.827 18:15:49 -- nvmf/common.sh@7 -- # uname -s 00:05:00.827 18:15:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.827 18:15:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.827 18:15:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.827 18:15:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.827 18:15:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.827 18:15:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.827 18:15:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.827 18:15:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.827 18:15:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.827 18:15:49 -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:05:00.827 18:15:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ea069fc8-fcc2-43f4-911f-3c99098acd58 00:05:00.827 18:15:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=ea069fc8-fcc2-43f4-911f-3c99098acd58 00:05:00.827 18:15:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.827 18:15:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.827 18:15:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.827 18:15:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.827 18:15:49 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:00.827 18:15:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.827 18:15:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.827 18:15:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.827 18:15:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.827 18:15:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.827 18:15:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.827 18:15:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.827 18:15:49 -- paths/export.sh@5 
-- # export PATH 00:05:00.827 18:15:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.827 18:15:49 -- nvmf/common.sh@51 -- # : 0 00:05:00.827 18:15:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:00.827 18:15:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:00.827 18:15:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.827 18:15:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.827 18:15:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.827 18:15:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:00.827 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:00.827 18:15:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:00.827 18:15:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:00.827 18:15:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:00.827 18:15:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:00.827 18:15:49 -- spdk/autotest.sh@32 -- # uname -s 00:05:00.827 18:15:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:00.827 18:15:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:00.827 18:15:49 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:00.827 18:15:49 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:00.827 18:15:49 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:00.827 18:15:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:00.827 18:15:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:00.827 18:15:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 
00:05:00.827 18:15:49 -- spdk/autotest.sh@48 -- # udevadm_pid=67929 00:05:00.827 18:15:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:00.827 18:15:49 -- pm/common@17 -- # local monitor 00:05:00.827 18:15:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:00.827 18:15:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.827 18:15:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:00.827 18:15:49 -- pm/common@25 -- # sleep 1 00:05:00.827 18:15:49 -- pm/common@21 -- # date +%s 00:05:00.827 18:15:49 -- pm/common@21 -- # date +%s 00:05:00.827 18:15:49 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728411349 00:05:00.827 18:15:49 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728411349 00:05:00.827 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728411349_collect-cpu-load.pm.log 00:05:00.827 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728411349_collect-vmstat.pm.log 00:05:01.769 18:15:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:01.769 18:15:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:01.769 18:15:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:01.769 18:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:01.769 18:15:50 -- spdk/autotest.sh@59 -- # create_test_list 00:05:01.769 18:15:50 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:01.769 18:15:50 -- common/autotest_common.sh@10 -- # set +x 00:05:02.069 18:15:50 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:02.069 18:15:50 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:02.069 18:15:50 -- 
spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:02.069 18:15:50 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:02.069 18:15:50 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:02.069 18:15:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:02.069 18:15:50 -- common/autotest_common.sh@1455 -- # uname 00:05:02.069 18:15:50 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:02.069 18:15:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:02.069 18:15:50 -- common/autotest_common.sh@1475 -- # uname 00:05:02.069 18:15:50 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:02.069 18:15:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:02.069 18:15:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:02.069 lcov: LCOV version 1.15 00:05:02.069 18:15:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:17.011 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:17.011 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:35.146 18:16:21 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:35.146 18:16:21 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:35.146 18:16:21 -- common/autotest_common.sh@10 -- # set +x 00:05:35.146 18:16:21 -- spdk/autotest.sh@78 -- # rm -f 00:05:35.146 18:16:21 -- spdk/autotest.sh@81 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.146 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.146 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:35.146 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:35.146 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:35.146 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:35.146 18:16:22 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:35.146 18:16:22 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:35.146 18:16:22 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:35.146 18:16:22 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:35.146 18:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:35.146 18:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:35.146 18:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:35.146 18:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:35.146 18:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:35.146 18:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:35.146 18:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:35.146 18:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:35.146 18:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:35.146 18:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:35.146 
18:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:35.146 18:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:05:35.146 18:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:05:35.146 18:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:35.146 18:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:05:35.146 18:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:05:35.146 18:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:35.146 18:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:35.146 18:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:35.146 18:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:35.146 18:16:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:35.146 18:16:22 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:35.146 18:16:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:35.146 18:16:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:35.146 18:16:22 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:35.146 18:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.146 18:16:22 -- 
spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.146 18:16:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:35.146 18:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:35.146 18:16:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:35.146 No valid GPT data, bailing 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # pt= 00:05:35.146 18:16:22 -- scripts/common.sh@395 -- # return 1 00:05:35.146 18:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:35.146 1+0 records in 00:05:35.146 1+0 records out 00:05:35.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300045 s, 34.9 MB/s 00:05:35.146 18:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.146 18:16:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.146 18:16:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:35.146 18:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:35.146 18:16:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:35.146 No valid GPT data, bailing 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # pt= 00:05:35.146 18:16:22 -- scripts/common.sh@395 -- # return 1 00:05:35.146 18:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:35.146 1+0 records in 00:05:35.146 1+0 records out 00:05:35.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00586571 s, 179 MB/s 00:05:35.146 18:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.146 18:16:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.146 18:16:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:35.146 18:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:35.146 18:16:22 -- 
scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:35.146 No valid GPT data, bailing 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # pt= 00:05:35.146 18:16:22 -- scripts/common.sh@395 -- # return 1 00:05:35.146 18:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:35.146 1+0 records in 00:05:35.146 1+0 records out 00:05:35.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577432 s, 182 MB/s 00:05:35.146 18:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.146 18:16:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.146 18:16:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:35.146 18:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:35.146 18:16:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:35.146 No valid GPT data, bailing 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # pt= 00:05:35.146 18:16:22 -- scripts/common.sh@395 -- # return 1 00:05:35.146 18:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:35.146 1+0 records in 00:05:35.146 1+0 records out 00:05:35.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00567487 s, 185 MB/s 00:05:35.146 18:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.146 18:16:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.146 18:16:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:35.146 18:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:35.146 18:16:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:35.146 No valid GPT data, bailing 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 
00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # pt= 00:05:35.146 18:16:22 -- scripts/common.sh@395 -- # return 1 00:05:35.146 18:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:35.146 1+0 records in 00:05:35.146 1+0 records out 00:05:35.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00651949 s, 161 MB/s 00:05:35.146 18:16:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:35.146 18:16:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:35.146 18:16:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:35.146 18:16:22 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:35.146 18:16:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:35.146 No valid GPT data, bailing 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:35.146 18:16:22 -- scripts/common.sh@394 -- # pt= 00:05:35.146 18:16:22 -- scripts/common.sh@395 -- # return 1 00:05:35.146 18:16:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:35.146 1+0 records in 00:05:35.146 1+0 records out 00:05:35.146 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465832 s, 225 MB/s 00:05:35.146 18:16:22 -- spdk/autotest.sh@105 -- # sync 00:05:35.146 18:16:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:35.146 18:16:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:35.146 18:16:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:36.091 18:16:24 -- spdk/autotest.sh@111 -- # uname -s 00:05:36.091 18:16:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:36.091 18:16:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:36.091 18:16:24 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:36.365 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
00:05:36.956 Hugepages 00:05:36.956 node hugesize free / total 00:05:36.956 node0 1048576kB 0 / 0 00:05:36.956 node0 2048kB 0 / 0 00:05:36.956 00:05:36.956 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.956 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:36.956 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:36.956 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:37.217 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:37.217 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:37.217 18:16:25 -- spdk/autotest.sh@117 -- # uname -s 00:05:37.217 18:16:25 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:37.217 18:16:25 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:37.217 18:16:25 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.790 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.362 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.362 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.362 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.362 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.362 18:16:27 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:39.306 18:16:28 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:39.306 18:16:28 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:39.306 18:16:28 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:39.306 18:16:28 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:39.306 18:16:28 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:39.306 18:16:28 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:39.306 18:16:28 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:39.306 18:16:28 -- common/autotest_common.sh@1497 -- # 
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:39.306 18:16:28 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:39.567 18:16:28 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:39.567 18:16:28 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:39.567 18:16:28 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:39.826 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.826 Waiting for block devices as requested 00:05:40.086 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:40.086 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:40.086 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:40.086 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:45.374 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:45.374 18:16:33 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:45.374 18:16:33 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:45.374 18:16:33 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.374 18:16:33 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:45.374 18:16:33 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:45.374 18:16:33 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:45.374 18:16:33 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:45.374 18:16:33 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:45.374 18:16:33 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:45.374 18:16:33 -- common/autotest_common.sh@1524 
-- # [[ -z /dev/nvme1 ]] 00:05:45.374 18:16:33 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:45.374 18:16:33 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:45.374 18:16:33 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:45.374 18:16:33 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:45.374 18:16:33 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:45.374 18:16:33 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:45.374 18:16:33 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:45.374 18:16:33 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:45.374 18:16:33 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:45.374 18:16:33 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:45.374 18:16:33 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:45.374 18:16:33 -- common/autotest_common.sh@1541 -- # continue 00:05:45.374 18:16:33 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:45.374 18:16:33 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:45.374 18:16:33 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.374 18:16:33 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:45.374 18:16:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:45.374 18:16:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:45.374 18:16:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:45.374 18:16:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:45.374 18:16:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:45.374 18:16:34 -- 
common/autotest_common.sh@1529 -- # grep oacs 00:05:45.374 18:16:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:45.374 18:16:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:45.374 18:16:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:45.374 18:16:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:45.374 18:16:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1541 -- # continue 00:05:45.374 18:16:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:45.374 18:16:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:45.374 18:16:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:45.374 18:16:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.374 18:16:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl 
/dev/nvme2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:45.374 18:16:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:45.374 18:16:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:45.374 18:16:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:45.374 18:16:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1541 -- # continue 00:05:45.374 18:16:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:45.374 18:16:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:45.374 18:16:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:45.374 18:16:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:45.374 18:16:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:45.374 18:16:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:45.374 18:16:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:45.374 18:16:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:45.374 18:16:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:45.374 18:16:34 -- 
common/autotest_common.sh@1529 -- # grep oacs 00:05:45.374 18:16:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:45.374 18:16:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:45.374 18:16:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:45.374 18:16:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:45.374 18:16:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:45.374 18:16:34 -- common/autotest_common.sh@1541 -- # continue 00:05:45.374 18:16:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:45.374 18:16:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:45.374 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:45.374 18:16:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:45.374 18:16:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:45.374 18:16:34 -- common/autotest_common.sh@10 -- # set +x 00:05:45.374 18:16:34 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:45.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.519 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.519 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.519 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.519 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:46.519 18:16:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:46.519 18:16:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.519 18:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.778 18:16:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:46.778 
18:16:35 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:46.778 18:16:35 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:46.778 18:16:35 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:46.778 18:16:35 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:46.778 18:16:35 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:46.778 18:16:35 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:46.778 18:16:35 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:46.778 18:16:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:46.778 18:16:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:46.778 18:16:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:46.778 18:16:35 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:46.778 18:16:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:46.778 18:16:35 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:46.778 18:16:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:46.778 18:16:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:46.778 18:16:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:46.778 18:16:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:46.778 18:16:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.778 18:16:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:46.778 18:16:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:46.778 18:16:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:46.778 18:16:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.778 18:16:35 -- common/autotest_common.sh@1563 -- # for bdf in 
"${_bdfs[@]}" 00:05:46.778 18:16:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:46.778 18:16:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:46.778 18:16:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.778 18:16:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:46.778 18:16:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:46.778 18:16:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:46.778 18:16:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:46.778 18:16:35 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:46.778 18:16:35 -- common/autotest_common.sh@1570 -- # return 0 00:05:46.778 18:16:35 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:46.778 18:16:35 -- common/autotest_common.sh@1578 -- # return 0 00:05:46.778 18:16:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:46.778 18:16:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:46.778 18:16:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:46.778 18:16:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:46.778 18:16:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:46.778 18:16:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:46.778 18:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.778 18:16:35 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:46.778 18:16:35 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:46.778 18:16:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.778 18:16:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.778 18:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:46.778 ************************************ 00:05:46.779 START TEST env 00:05:46.779 ************************************ 00:05:46.779 18:16:35 env -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:46.779 * Looking for test storage... 00:05:46.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:46.779 18:16:35 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:46.779 18:16:35 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:46.779 18:16:35 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:47.037 18:16:35 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:47.037 18:16:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.037 18:16:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.037 18:16:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.037 18:16:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.037 18:16:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.037 18:16:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.037 18:16:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.037 18:16:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.037 18:16:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.037 18:16:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.037 18:16:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.037 18:16:35 env -- scripts/common.sh@344 -- # case "$op" in 00:05:47.037 18:16:35 env -- scripts/common.sh@345 -- # : 1 00:05:47.037 18:16:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.037 18:16:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.037 18:16:35 env -- scripts/common.sh@365 -- # decimal 1 00:05:47.037 18:16:35 env -- scripts/common.sh@353 -- # local d=1 00:05:47.037 18:16:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.037 18:16:35 env -- scripts/common.sh@355 -- # echo 1 00:05:47.037 18:16:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.037 18:16:35 env -- scripts/common.sh@366 -- # decimal 2 00:05:47.037 18:16:35 env -- scripts/common.sh@353 -- # local d=2 00:05:47.037 18:16:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.037 18:16:35 env -- scripts/common.sh@355 -- # echo 2 00:05:47.037 18:16:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.037 18:16:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.037 18:16:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.037 18:16:35 env -- scripts/common.sh@368 -- # return 0 00:05:47.038 18:16:35 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.038 18:16:35 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:47.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.038 --rc genhtml_branch_coverage=1 00:05:47.038 --rc genhtml_function_coverage=1 00:05:47.038 --rc genhtml_legend=1 00:05:47.038 --rc geninfo_all_blocks=1 00:05:47.038 --rc geninfo_unexecuted_blocks=1 00:05:47.038 00:05:47.038 ' 00:05:47.038 18:16:35 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:47.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.038 --rc genhtml_branch_coverage=1 00:05:47.038 --rc genhtml_function_coverage=1 00:05:47.038 --rc genhtml_legend=1 00:05:47.038 --rc geninfo_all_blocks=1 00:05:47.038 --rc geninfo_unexecuted_blocks=1 00:05:47.038 00:05:47.038 ' 00:05:47.038 18:16:35 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:47.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:47.038 --rc genhtml_branch_coverage=1 00:05:47.038 --rc genhtml_function_coverage=1 00:05:47.038 --rc genhtml_legend=1 00:05:47.038 --rc geninfo_all_blocks=1 00:05:47.038 --rc geninfo_unexecuted_blocks=1 00:05:47.038 00:05:47.038 ' 00:05:47.038 18:16:35 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:47.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.038 --rc genhtml_branch_coverage=1 00:05:47.038 --rc genhtml_function_coverage=1 00:05:47.038 --rc genhtml_legend=1 00:05:47.038 --rc geninfo_all_blocks=1 00:05:47.038 --rc geninfo_unexecuted_blocks=1 00:05:47.038 00:05:47.038 ' 00:05:47.038 18:16:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.038 18:16:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.038 18:16:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.038 18:16:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.038 ************************************ 00:05:47.038 START TEST env_memory 00:05:47.038 ************************************ 00:05:47.038 18:16:35 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:47.038 00:05:47.038 00:05:47.038 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.038 http://cunit.sourceforge.net/ 00:05:47.038 00:05:47.038 00:05:47.038 Suite: memory 00:05:47.038 Test: alloc and free memory map ...[2024-10-08 18:16:35.710934] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:47.038 passed 00:05:47.038 Test: mem map translation ...[2024-10-08 18:16:35.749760] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:47.038 [2024-10-08 18:16:35.749868] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:47.038 [2024-10-08 18:16:35.749975] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:47.038 [2024-10-08 18:16:35.750273] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:47.038 passed 00:05:47.038 Test: mem map registration ...[2024-10-08 18:16:35.818355] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:47.038 [2024-10-08 18:16:35.818455] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:47.038 passed 00:05:47.297 Test: mem map adjacent registrations ...passed 00:05:47.297 00:05:47.297 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.297 suites 1 1 n/a 0 0 00:05:47.297 tests 4 4 4 0 0 00:05:47.297 asserts 152 152 152 0 n/a 00:05:47.297 00:05:47.297 Elapsed time = 0.233 seconds 00:05:47.297 00:05:47.297 real 0m0.270s 00:05:47.297 user 0m0.234s 00:05:47.297 sys 0m0.029s 00:05:47.297 18:16:35 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.297 18:16:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:47.297 ************************************ 00:05:47.297 END TEST env_memory 00:05:47.297 ************************************ 00:05:47.297 18:16:35 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:47.297 18:16:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.297 18:16:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.297 18:16:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.297 
************************************ 00:05:47.297 START TEST env_vtophys 00:05:47.297 ************************************ 00:05:47.297 18:16:35 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:47.297 EAL: lib.eal log level changed from notice to debug 00:05:47.297 EAL: Detected lcore 0 as core 0 on socket 0 00:05:47.297 EAL: Detected lcore 1 as core 0 on socket 0 00:05:47.297 EAL: Detected lcore 2 as core 0 on socket 0 00:05:47.297 EAL: Detected lcore 3 as core 0 on socket 0 00:05:47.297 EAL: Detected lcore 4 as core 0 on socket 0 00:05:47.297 EAL: Detected lcore 5 as core 0 on socket 0 00:05:47.297 EAL: Detected lcore 6 as core 0 on socket 0 00:05:47.297 EAL: Detected lcore 7 as core 0 on socket 0 00:05:47.297 EAL: Detected lcore 8 as core 0 on socket 0 00:05:47.297 EAL: Detected lcore 9 as core 0 on socket 0 00:05:47.297 EAL: Maximum logical cores by configuration: 128 00:05:47.297 EAL: Detected CPU lcores: 10 00:05:47.297 EAL: Detected NUMA nodes: 1 00:05:47.297 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:05:47.297 EAL: Detected shared linkage of DPDK 00:05:47.297 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:05:47.297 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:05:47.297 EAL: Registered [vdev] bus. 
00:05:47.297 EAL: bus.vdev log level changed from disabled to notice 00:05:47.297 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:05:47.297 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:05:47.297 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:47.297 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:47.297 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:05:47.297 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:05:47.297 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:05:47.297 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:05:47.297 EAL: No shared files mode enabled, IPC will be disabled 00:05:47.297 EAL: No shared files mode enabled, IPC is disabled 00:05:47.297 EAL: Selected IOVA mode 'PA' 00:05:47.297 EAL: Probing VFIO support... 00:05:47.297 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:47.297 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:47.297 EAL: Ask a virtual area of 0x2e000 bytes 00:05:47.297 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:47.297 EAL: Setting up physically contiguous memory... 
00:05:47.297 EAL: Setting maximum number of open files to 524288 00:05:47.297 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:47.297 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:47.297 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.297 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:47.297 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.297 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.297 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:47.297 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:47.297 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.297 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:47.297 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.297 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.297 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:47.297 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:47.297 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.297 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:47.297 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.297 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.297 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:47.297 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:47.297 EAL: Ask a virtual area of 0x61000 bytes 00:05:47.297 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:47.297 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:47.297 EAL: Ask a virtual area of 0x400000000 bytes 00:05:47.297 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:47.297 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:47.297 EAL: Hugepages will be freed exactly as allocated. 
00:05:47.297 EAL: No shared files mode enabled, IPC is disabled 00:05:47.297 EAL: No shared files mode enabled, IPC is disabled 00:05:47.297 EAL: TSC frequency is ~2600000 KHz 00:05:47.297 EAL: Main lcore 0 is ready (tid=7fa474bc9a40;cpuset=[0]) 00:05:47.297 EAL: Trying to obtain current memory policy. 00:05:47.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.297 EAL: Restoring previous memory policy: 0 00:05:47.297 EAL: request: mp_malloc_sync 00:05:47.297 EAL: No shared files mode enabled, IPC is disabled 00:05:47.297 EAL: Heap on socket 0 was expanded by 2MB 00:05:47.297 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:47.297 EAL: No shared files mode enabled, IPC is disabled 00:05:47.297 EAL: Mem event callback 'spdk:(nil)' registered 00:05:47.297 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:47.297 00:05:47.297 00:05:47.297 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.297 http://cunit.sourceforge.net/ 00:05:47.297 00:05:47.297 00:05:47.297 Suite: components_suite 00:05:47.865 Test: vtophys_malloc_test ...passed 00:05:47.865 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:47.865 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.865 EAL: Restoring previous memory policy: 4 00:05:47.865 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.865 EAL: request: mp_malloc_sync 00:05:47.865 EAL: No shared files mode enabled, IPC is disabled 00:05:47.865 EAL: Heap on socket 0 was expanded by 4MB 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was shrunk by 4MB 00:05:47.866 EAL: Trying to obtain current memory policy. 
00:05:47.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.866 EAL: Restoring previous memory policy: 4 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was expanded by 6MB 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was shrunk by 6MB 00:05:47.866 EAL: Trying to obtain current memory policy. 00:05:47.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.866 EAL: Restoring previous memory policy: 4 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was expanded by 10MB 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was shrunk by 10MB 00:05:47.866 EAL: Trying to obtain current memory policy. 00:05:47.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.866 EAL: Restoring previous memory policy: 4 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was expanded by 18MB 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was shrunk by 18MB 00:05:47.866 EAL: Trying to obtain current memory policy. 
00:05:47.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.866 EAL: Restoring previous memory policy: 4 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was expanded by 34MB 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was shrunk by 34MB 00:05:47.866 EAL: Trying to obtain current memory policy. 00:05:47.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.866 EAL: Restoring previous memory policy: 4 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was expanded by 66MB 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was shrunk by 66MB 00:05:47.866 EAL: Trying to obtain current memory policy. 00:05:47.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.866 EAL: Restoring previous memory policy: 4 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was expanded by 130MB 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was shrunk by 130MB 00:05:47.866 EAL: Trying to obtain current memory policy. 
00:05:47.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.866 EAL: Restoring previous memory policy: 4 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was expanded by 258MB 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was shrunk by 258MB 00:05:47.866 EAL: Trying to obtain current memory policy. 00:05:47.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:47.866 EAL: Restoring previous memory policy: 4 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:47.866 EAL: request: mp_malloc_sync 00:05:47.866 EAL: No shared files mode enabled, IPC is disabled 00:05:47.866 EAL: Heap on socket 0 was expanded by 514MB 00:05:47.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.125 EAL: request: mp_malloc_sync 00:05:48.125 EAL: No shared files mode enabled, IPC is disabled 00:05:48.125 EAL: Heap on socket 0 was shrunk by 514MB 00:05:48.125 EAL: Trying to obtain current memory policy. 
00:05:48.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:48.125 EAL: Restoring previous memory policy: 4 00:05:48.125 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.125 EAL: request: mp_malloc_sync 00:05:48.125 EAL: No shared files mode enabled, IPC is disabled 00:05:48.125 EAL: Heap on socket 0 was expanded by 1026MB 00:05:48.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.383 passed 00:05:48.383 00:05:48.383 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.383 suites 1 1 n/a 0 0 00:05:48.383 tests 2 2 2 0 0 00:05:48.383 asserts 5540 5540 5540 0 n/a 00:05:48.383 00:05:48.383 Elapsed time = 0.938 seconds 00:05:48.383 EAL: request: mp_malloc_sync 00:05:48.383 EAL: No shared files mode enabled, IPC is disabled 00:05:48.383 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:48.383 EAL: Calling mem event callback 'spdk:(nil)' 00:05:48.383 EAL: request: mp_malloc_sync 00:05:48.383 EAL: No shared files mode enabled, IPC is disabled 00:05:48.383 EAL: Heap on socket 0 was shrunk by 2MB 00:05:48.383 EAL: No shared files mode enabled, IPC is disabled 00:05:48.383 EAL: No shared files mode enabled, IPC is disabled 00:05:48.383 EAL: No shared files mode enabled, IPC is disabled 00:05:48.383 00:05:48.383 real 0m1.162s 00:05:48.383 user 0m0.476s 00:05:48.383 sys 0m0.554s 00:05:48.383 18:16:37 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.383 18:16:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:48.383 ************************************ 00:05:48.383 END TEST env_vtophys 00:05:48.383 ************************************ 00:05:48.384 18:16:37 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:48.384 18:16:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.384 18:16:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.384 18:16:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.384 
************************************ 00:05:48.384 START TEST env_pci 00:05:48.384 ************************************ 00:05:48.384 18:16:37 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:48.384 00:05:48.384 00:05:48.384 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.384 http://cunit.sourceforge.net/ 00:05:48.384 00:05:48.384 00:05:48.384 Suite: pci 00:05:48.384 Test: pci_hook ...[2024-10-08 18:16:37.193167] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70668 has claimed it 00:05:48.384 passed 00:05:48.384 00:05:48.384 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.384 suites 1 1 n/a 0 0 00:05:48.384 tests 1 1 1 0 0 00:05:48.384 asserts 25 25 25 0 n/a 00:05:48.384 00:05:48.384 Elapsed time = 0.004 seconds 00:05:48.384 EAL: Cannot find device (10000:00:01.0) 00:05:48.384 EAL: Failed to attach device on primary process 00:05:48.384 00:05:48.384 real 0m0.057s 00:05:48.384 user 0m0.029s 00:05:48.384 sys 0m0.028s 00:05:48.642 18:16:37 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.642 ************************************ 00:05:48.642 END TEST env_pci 00:05:48.642 ************************************ 00:05:48.642 18:16:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:48.642 18:16:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:48.642 18:16:37 env -- env/env.sh@15 -- # uname 00:05:48.642 18:16:37 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:48.642 18:16:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:48.642 18:16:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.642 18:16:37 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:48.642 18:16:37 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.642 18:16:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.642 ************************************ 00:05:48.642 START TEST env_dpdk_post_init 00:05:48.642 ************************************ 00:05:48.642 18:16:37 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:48.642 EAL: Detected CPU lcores: 10 00:05:48.642 EAL: Detected NUMA nodes: 1 00:05:48.642 EAL: Detected shared linkage of DPDK 00:05:48.642 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.642 EAL: Selected IOVA mode 'PA' 00:05:48.901 Starting DPDK initialization... 00:05:48.901 Starting SPDK post initialization... 00:05:48.901 SPDK NVMe probe 00:05:48.901 Attaching to 0000:00:10.0 00:05:48.901 Attaching to 0000:00:11.0 00:05:48.901 Attaching to 0000:00:12.0 00:05:48.901 Attaching to 0000:00:13.0 00:05:48.901 Attached to 0000:00:10.0 00:05:48.901 Attached to 0000:00:11.0 00:05:48.901 Attached to 0000:00:13.0 00:05:48.901 Attached to 0000:00:12.0 00:05:48.901 Cleaning up... 
00:05:48.901 00:05:48.901 real 0m0.216s 00:05:48.901 user 0m0.060s 00:05:48.901 sys 0m0.059s 00:05:48.901 18:16:37 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.901 18:16:37 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.901 ************************************ 00:05:48.901 END TEST env_dpdk_post_init 00:05:48.901 ************************************ 00:05:48.901 18:16:37 env -- env/env.sh@26 -- # uname 00:05:48.901 18:16:37 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:48.901 18:16:37 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:48.901 18:16:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.901 18:16:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.901 18:16:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:48.901 ************************************ 00:05:48.901 START TEST env_mem_callbacks 00:05:48.901 ************************************ 00:05:48.901 18:16:37 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:48.901 EAL: Detected CPU lcores: 10 00:05:48.901 EAL: Detected NUMA nodes: 1 00:05:48.901 EAL: Detected shared linkage of DPDK 00:05:48.901 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:48.901 EAL: Selected IOVA mode 'PA' 00:05:48.901 00:05:48.901 00:05:48.901 CUnit - A unit testing framework for C - Version 2.1-3 00:05:48.901 http://cunit.sourceforge.net/ 00:05:48.901 00:05:48.901 00:05:48.901 Suite: memory 00:05:48.901 Test: test ... 
00:05:48.901 register 0x200000200000 2097152 00:05:48.901 malloc 3145728 00:05:48.901 register 0x200000400000 4194304 00:05:48.901 buf 0x200000500000 len 3145728 PASSED 00:05:48.901 malloc 64 00:05:48.901 buf 0x2000004fff40 len 64 PASSED 00:05:48.901 malloc 4194304 00:05:48.901 register 0x200000800000 6291456 00:05:48.901 buf 0x200000a00000 len 4194304 PASSED 00:05:48.901 free 0x200000500000 3145728 00:05:48.901 free 0x2000004fff40 64 00:05:48.901 unregister 0x200000400000 4194304 PASSED 00:05:48.901 free 0x200000a00000 4194304 00:05:48.901 unregister 0x200000800000 6291456 PASSED 00:05:48.901 malloc 8388608 00:05:48.901 register 0x200000400000 10485760 00:05:48.901 buf 0x200000600000 len 8388608 PASSED 00:05:48.901 free 0x200000600000 8388608 00:05:48.901 unregister 0x200000400000 10485760 PASSED 00:05:48.901 passed 00:05:48.901 00:05:48.901 Run Summary: Type Total Ran Passed Failed Inactive 00:05:48.901 suites 1 1 n/a 0 0 00:05:48.901 tests 1 1 1 0 0 00:05:48.901 asserts 15 15 15 0 n/a 00:05:48.901 00:05:48.901 Elapsed time = 0.011 seconds 00:05:48.901 00:05:48.901 real 0m0.170s 00:05:48.901 user 0m0.025s 00:05:48.901 sys 0m0.042s 00:05:48.901 18:16:37 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.901 18:16:37 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:48.901 ************************************ 00:05:48.901 END TEST env_mem_callbacks 00:05:48.901 ************************************ 00:05:49.162 00:05:49.162 real 0m2.261s 00:05:49.162 user 0m0.976s 00:05:49.162 sys 0m0.910s 00:05:49.162 18:16:37 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.162 18:16:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:49.162 ************************************ 00:05:49.162 END TEST env 00:05:49.162 ************************************ 00:05:49.162 18:16:37 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:49.162 18:16:37 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.162 18:16:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.162 18:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:49.162 ************************************ 00:05:49.162 START TEST rpc 00:05:49.162 ************************************ 00:05:49.162 18:16:37 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:49.162 * Looking for test storage... 00:05:49.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.162 18:16:37 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:49.162 18:16:37 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:49.162 18:16:37 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:49.162 18:16:37 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:49.162 18:16:37 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.162 18:16:37 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.162 18:16:37 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.162 18:16:37 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.162 18:16:37 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.162 18:16:37 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.162 18:16:37 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.162 18:16:37 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.162 18:16:37 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.162 18:16:37 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.162 18:16:37 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.162 18:16:37 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:49.162 18:16:37 rpc -- scripts/common.sh@345 -- # : 1 00:05:49.162 18:16:37 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.162 18:16:37 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.162 18:16:37 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:49.162 18:16:37 rpc -- scripts/common.sh@353 -- # local d=1 00:05:49.162 18:16:37 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.162 18:16:37 rpc -- scripts/common.sh@355 -- # echo 1 00:05:49.162 18:16:37 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.162 18:16:37 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:49.162 18:16:37 rpc -- scripts/common.sh@353 -- # local d=2 00:05:49.162 18:16:37 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.162 18:16:37 rpc -- scripts/common.sh@355 -- # echo 2 00:05:49.162 18:16:37 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.162 18:16:37 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.162 18:16:37 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.162 18:16:37 rpc -- scripts/common.sh@368 -- # return 0 00:05:49.162 18:16:37 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.162 18:16:37 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:49.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.162 --rc genhtml_branch_coverage=1 00:05:49.162 --rc genhtml_function_coverage=1 00:05:49.162 --rc genhtml_legend=1 00:05:49.162 --rc geninfo_all_blocks=1 00:05:49.162 --rc geninfo_unexecuted_blocks=1 00:05:49.162 00:05:49.162 ' 00:05:49.162 18:16:37 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:49.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.162 --rc genhtml_branch_coverage=1 00:05:49.162 --rc genhtml_function_coverage=1 00:05:49.162 --rc genhtml_legend=1 00:05:49.162 --rc geninfo_all_blocks=1 00:05:49.163 --rc geninfo_unexecuted_blocks=1 00:05:49.163 00:05:49.163 ' 00:05:49.163 18:16:37 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:49.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:49.163 --rc genhtml_branch_coverage=1 00:05:49.163 --rc genhtml_function_coverage=1 00:05:49.163 --rc genhtml_legend=1 00:05:49.163 --rc geninfo_all_blocks=1 00:05:49.163 --rc geninfo_unexecuted_blocks=1 00:05:49.163 00:05:49.163 ' 00:05:49.163 18:16:37 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:49.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.163 --rc genhtml_branch_coverage=1 00:05:49.163 --rc genhtml_function_coverage=1 00:05:49.163 --rc genhtml_legend=1 00:05:49.163 --rc geninfo_all_blocks=1 00:05:49.163 --rc geninfo_unexecuted_blocks=1 00:05:49.163 00:05:49.163 ' 00:05:49.163 18:16:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70790 00:05:49.163 18:16:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.163 18:16:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70790 00:05:49.163 18:16:37 rpc -- common/autotest_common.sh@831 -- # '[' -z 70790 ']' 00:05:49.163 18:16:37 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.163 18:16:37 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.163 18:16:37 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:49.163 18:16:37 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.163 18:16:37 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.163 18:16:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.421 [2024-10-08 18:16:38.137809] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:05:49.422 [2024-10-08 18:16:38.137955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70790 ] 00:05:49.422 [2024-10-08 18:16:38.269377] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:49.679 [2024-10-08 18:16:38.288489] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.679 [2024-10-08 18:16:38.319911] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:49.679 [2024-10-08 18:16:38.319959] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70790' to capture a snapshot of events at runtime. 00:05:49.679 [2024-10-08 18:16:38.319969] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:49.679 [2024-10-08 18:16:38.319983] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:49.679 [2024-10-08 18:16:38.319990] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70790 for offline analysis/debug. 
00:05:49.679 [2024-10-08 18:16:38.320289] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.245 18:16:38 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.245 18:16:38 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:50.245 18:16:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:50.245 18:16:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:50.245 18:16:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:50.245 18:16:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:50.245 18:16:38 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.245 18:16:38 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.245 18:16:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.245 ************************************ 00:05:50.245 START TEST rpc_integrity 00:05:50.245 ************************************ 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:50.245 18:16:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.245 18:16:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:50.245 18:16:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:50.245 18:16:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:50.245 18:16:38 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.245 18:16:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:50.245 18:16:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.245 18:16:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:50.245 { 00:05:50.245 "name": "Malloc0", 00:05:50.245 "aliases": [ 00:05:50.245 "b0f37b4b-85c0-404d-970b-a357b6eec0e5" 00:05:50.245 ], 00:05:50.245 "product_name": "Malloc disk", 00:05:50.245 "block_size": 512, 00:05:50.245 "num_blocks": 16384, 00:05:50.245 "uuid": "b0f37b4b-85c0-404d-970b-a357b6eec0e5", 00:05:50.245 "assigned_rate_limits": { 00:05:50.245 "rw_ios_per_sec": 0, 00:05:50.245 "rw_mbytes_per_sec": 0, 00:05:50.245 "r_mbytes_per_sec": 0, 00:05:50.245 "w_mbytes_per_sec": 0 00:05:50.245 }, 00:05:50.245 "claimed": false, 00:05:50.245 "zoned": false, 00:05:50.245 "supported_io_types": { 00:05:50.245 "read": true, 00:05:50.245 "write": true, 00:05:50.245 "unmap": true, 00:05:50.245 "flush": true, 00:05:50.245 "reset": true, 00:05:50.245 "nvme_admin": false, 00:05:50.245 "nvme_io": false, 00:05:50.245 "nvme_io_md": false, 00:05:50.245 "write_zeroes": true, 00:05:50.245 "zcopy": true, 00:05:50.245 "get_zone_info": false, 00:05:50.245 "zone_management": false, 00:05:50.245 "zone_append": false, 00:05:50.245 "compare": false, 00:05:50.245 "compare_and_write": false, 00:05:50.245 "abort": true, 00:05:50.245 "seek_hole": false, 
00:05:50.245 "seek_data": false, 00:05:50.245 "copy": true, 00:05:50.245 "nvme_iov_md": false 00:05:50.245 }, 00:05:50.245 "memory_domains": [ 00:05:50.245 { 00:05:50.245 "dma_device_id": "system", 00:05:50.245 "dma_device_type": 1 00:05:50.245 }, 00:05:50.245 { 00:05:50.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.245 "dma_device_type": 2 00:05:50.245 } 00:05:50.245 ], 00:05:50.245 "driver_specific": {} 00:05:50.245 } 00:05:50.245 ]' 00:05:50.245 18:16:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:50.245 18:16:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:50.245 18:16:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.245 18:16:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.245 [2024-10-08 18:16:39.000003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:50.245 [2024-10-08 18:16:39.000064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:50.245 [2024-10-08 18:16:39.000088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:50.245 [2024-10-08 18:16:39.000099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:50.246 [2024-10-08 18:16:39.002341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:50.246 [2024-10-08 18:16:39.002383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:50.246 Passthru0 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.246 18:16:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.246 18:16:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:50.246 { 00:05:50.246 "name": "Malloc0", 00:05:50.246 "aliases": [ 00:05:50.246 "b0f37b4b-85c0-404d-970b-a357b6eec0e5" 00:05:50.246 ], 00:05:50.246 "product_name": "Malloc disk", 00:05:50.246 "block_size": 512, 00:05:50.246 "num_blocks": 16384, 00:05:50.246 "uuid": "b0f37b4b-85c0-404d-970b-a357b6eec0e5", 00:05:50.246 "assigned_rate_limits": { 00:05:50.246 "rw_ios_per_sec": 0, 00:05:50.246 "rw_mbytes_per_sec": 0, 00:05:50.246 "r_mbytes_per_sec": 0, 00:05:50.246 "w_mbytes_per_sec": 0 00:05:50.246 }, 00:05:50.246 "claimed": true, 00:05:50.246 "claim_type": "exclusive_write", 00:05:50.246 "zoned": false, 00:05:50.246 "supported_io_types": { 00:05:50.246 "read": true, 00:05:50.246 "write": true, 00:05:50.246 "unmap": true, 00:05:50.246 "flush": true, 00:05:50.246 "reset": true, 00:05:50.246 "nvme_admin": false, 00:05:50.246 "nvme_io": false, 00:05:50.246 "nvme_io_md": false, 00:05:50.246 "write_zeroes": true, 00:05:50.246 "zcopy": true, 00:05:50.246 "get_zone_info": false, 00:05:50.246 "zone_management": false, 00:05:50.246 "zone_append": false, 00:05:50.246 "compare": false, 00:05:50.246 "compare_and_write": false, 00:05:50.246 "abort": true, 00:05:50.246 "seek_hole": false, 00:05:50.246 "seek_data": false, 00:05:50.246 "copy": true, 00:05:50.246 "nvme_iov_md": false 00:05:50.246 }, 00:05:50.246 "memory_domains": [ 00:05:50.246 { 00:05:50.246 "dma_device_id": "system", 00:05:50.246 "dma_device_type": 1 00:05:50.246 }, 00:05:50.246 { 00:05:50.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.246 "dma_device_type": 2 00:05:50.246 } 00:05:50.246 ], 00:05:50.246 "driver_specific": {} 00:05:50.246 }, 00:05:50.246 { 00:05:50.246 "name": "Passthru0", 00:05:50.246 "aliases": [ 00:05:50.246 "54e7800b-8f89-520a-896a-fa747cd49126" 00:05:50.246 ], 00:05:50.246 "product_name": "passthru", 00:05:50.246 
"block_size": 512, 00:05:50.246 "num_blocks": 16384, 00:05:50.246 "uuid": "54e7800b-8f89-520a-896a-fa747cd49126", 00:05:50.246 "assigned_rate_limits": { 00:05:50.246 "rw_ios_per_sec": 0, 00:05:50.246 "rw_mbytes_per_sec": 0, 00:05:50.246 "r_mbytes_per_sec": 0, 00:05:50.246 "w_mbytes_per_sec": 0 00:05:50.246 }, 00:05:50.246 "claimed": false, 00:05:50.246 "zoned": false, 00:05:50.246 "supported_io_types": { 00:05:50.246 "read": true, 00:05:50.246 "write": true, 00:05:50.246 "unmap": true, 00:05:50.246 "flush": true, 00:05:50.246 "reset": true, 00:05:50.246 "nvme_admin": false, 00:05:50.246 "nvme_io": false, 00:05:50.246 "nvme_io_md": false, 00:05:50.246 "write_zeroes": true, 00:05:50.246 "zcopy": true, 00:05:50.246 "get_zone_info": false, 00:05:50.246 "zone_management": false, 00:05:50.246 "zone_append": false, 00:05:50.246 "compare": false, 00:05:50.246 "compare_and_write": false, 00:05:50.246 "abort": true, 00:05:50.246 "seek_hole": false, 00:05:50.246 "seek_data": false, 00:05:50.246 "copy": true, 00:05:50.246 "nvme_iov_md": false 00:05:50.246 }, 00:05:50.246 "memory_domains": [ 00:05:50.246 { 00:05:50.246 "dma_device_id": "system", 00:05:50.246 "dma_device_type": 1 00:05:50.246 }, 00:05:50.246 { 00:05:50.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.246 "dma_device_type": 2 00:05:50.246 } 00:05:50.246 ], 00:05:50.246 "driver_specific": { 00:05:50.246 "passthru": { 00:05:50.246 "name": "Passthru0", 00:05:50.246 "base_bdev_name": "Malloc0" 00:05:50.246 } 00:05:50.246 } 00:05:50.246 } 00:05:50.246 ]' 00:05:50.246 18:16:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:50.246 18:16:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:50.246 18:16:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.246 18:16:39 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.246 18:16:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.246 18:16:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.246 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.246 18:16:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:50.246 18:16:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:50.507 18:16:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:50.507 00:05:50.507 real 0m0.226s 00:05:50.507 user 0m0.127s 00:05:50.507 sys 0m0.034s 00:05:50.507 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.507 18:16:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.507 ************************************ 00:05:50.507 END TEST rpc_integrity 00:05:50.507 ************************************ 00:05:50.507 18:16:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:50.507 18:16:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.507 18:16:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.507 18:16:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.507 ************************************ 00:05:50.507 START TEST rpc_plugins 00:05:50.507 ************************************ 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:50.507 { 00:05:50.507 "name": "Malloc1", 00:05:50.507 "aliases": [ 00:05:50.507 "371cdf0f-660b-45d3-8e4e-f6cd2344fd43" 00:05:50.507 ], 00:05:50.507 "product_name": "Malloc disk", 00:05:50.507 "block_size": 4096, 00:05:50.507 "num_blocks": 256, 00:05:50.507 "uuid": "371cdf0f-660b-45d3-8e4e-f6cd2344fd43", 00:05:50.507 "assigned_rate_limits": { 00:05:50.507 "rw_ios_per_sec": 0, 00:05:50.507 "rw_mbytes_per_sec": 0, 00:05:50.507 "r_mbytes_per_sec": 0, 00:05:50.507 "w_mbytes_per_sec": 0 00:05:50.507 }, 00:05:50.507 "claimed": false, 00:05:50.507 "zoned": false, 00:05:50.507 "supported_io_types": { 00:05:50.507 "read": true, 00:05:50.507 "write": true, 00:05:50.507 "unmap": true, 00:05:50.507 "flush": true, 00:05:50.507 "reset": true, 00:05:50.507 "nvme_admin": false, 00:05:50.507 "nvme_io": false, 00:05:50.507 "nvme_io_md": false, 00:05:50.507 "write_zeroes": true, 00:05:50.507 "zcopy": true, 00:05:50.507 "get_zone_info": false, 00:05:50.507 "zone_management": false, 00:05:50.507 "zone_append": false, 00:05:50.507 "compare": false, 00:05:50.507 "compare_and_write": false, 00:05:50.507 "abort": true, 00:05:50.507 "seek_hole": false, 00:05:50.507 "seek_data": false, 00:05:50.507 "copy": 
true, 00:05:50.507 "nvme_iov_md": false 00:05:50.507 }, 00:05:50.507 "memory_domains": [ 00:05:50.507 { 00:05:50.507 "dma_device_id": "system", 00:05:50.507 "dma_device_type": 1 00:05:50.507 }, 00:05:50.507 { 00:05:50.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.507 "dma_device_type": 2 00:05:50.507 } 00:05:50.507 ], 00:05:50.507 "driver_specific": {} 00:05:50.507 } 00:05:50.507 ]' 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:50.507 18:16:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:50.507 00:05:50.507 real 0m0.120s 00:05:50.507 user 0m0.064s 00:05:50.507 sys 0m0.021s 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.507 18:16:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.507 ************************************ 00:05:50.507 END TEST rpc_plugins 00:05:50.507 ************************************ 00:05:50.507 18:16:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:50.507 18:16:39 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.507 18:16:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.507 18:16:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.507 ************************************ 00:05:50.507 START TEST rpc_trace_cmd_test 00:05:50.507 ************************************ 00:05:50.507 18:16:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:50.507 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:50.507 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:50.507 18:16:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.507 18:16:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:50.766 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70790", 00:05:50.766 "tpoint_group_mask": "0x8", 00:05:50.766 "iscsi_conn": { 00:05:50.766 "mask": "0x2", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "scsi": { 00:05:50.766 "mask": "0x4", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "bdev": { 00:05:50.766 "mask": "0x8", 00:05:50.766 "tpoint_mask": "0xffffffffffffffff" 00:05:50.766 }, 00:05:50.766 "nvmf_rdma": { 00:05:50.766 "mask": "0x10", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "nvmf_tcp": { 00:05:50.766 "mask": "0x20", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "ftl": { 00:05:50.766 "mask": "0x40", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "blobfs": { 00:05:50.766 "mask": "0x80", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "dsa": { 00:05:50.766 "mask": "0x200", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "thread": { 00:05:50.766 "mask": "0x400", 00:05:50.766 
"tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "nvme_pcie": { 00:05:50.766 "mask": "0x800", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "iaa": { 00:05:50.766 "mask": "0x1000", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "nvme_tcp": { 00:05:50.766 "mask": "0x2000", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "bdev_nvme": { 00:05:50.766 "mask": "0x4000", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "sock": { 00:05:50.766 "mask": "0x8000", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "blob": { 00:05:50.766 "mask": "0x10000", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "bdev_raid": { 00:05:50.766 "mask": "0x20000", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 }, 00:05:50.766 "scheduler": { 00:05:50.766 "mask": "0x40000", 00:05:50.766 "tpoint_mask": "0x0" 00:05:50.766 } 00:05:50.766 }' 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:50.766 00:05:50.766 real 0m0.166s 00:05:50.766 user 0m0.132s 00:05:50.766 sys 0m0.023s 00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 
00:05:50.766 18:16:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.766 ************************************ 00:05:50.766 END TEST rpc_trace_cmd_test 00:05:50.766 ************************************ 00:05:50.766 18:16:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:50.766 18:16:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:50.766 18:16:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:50.766 18:16:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.766 18:16:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.766 18:16:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.766 ************************************ 00:05:50.766 START TEST rpc_daemon_integrity 00:05:50.766 ************************************ 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.766 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:51.026 { 00:05:51.026 "name": "Malloc2", 00:05:51.026 "aliases": [ 00:05:51.026 "70ced1fe-b9da-4bce-b381-dcb4a2751db0" 00:05:51.026 ], 00:05:51.026 "product_name": "Malloc disk", 00:05:51.026 "block_size": 512, 00:05:51.026 "num_blocks": 16384, 00:05:51.026 "uuid": "70ced1fe-b9da-4bce-b381-dcb4a2751db0", 00:05:51.026 "assigned_rate_limits": { 00:05:51.026 "rw_ios_per_sec": 0, 00:05:51.026 "rw_mbytes_per_sec": 0, 00:05:51.026 "r_mbytes_per_sec": 0, 00:05:51.026 "w_mbytes_per_sec": 0 00:05:51.026 }, 00:05:51.026 "claimed": false, 00:05:51.026 "zoned": false, 00:05:51.026 "supported_io_types": { 00:05:51.026 "read": true, 00:05:51.026 "write": true, 00:05:51.026 "unmap": true, 00:05:51.026 "flush": true, 00:05:51.026 "reset": true, 00:05:51.026 "nvme_admin": false, 00:05:51.026 "nvme_io": false, 00:05:51.026 "nvme_io_md": false, 00:05:51.026 "write_zeroes": true, 00:05:51.026 "zcopy": true, 00:05:51.026 "get_zone_info": false, 00:05:51.026 "zone_management": false, 00:05:51.026 "zone_append": false, 00:05:51.026 "compare": false, 00:05:51.026 "compare_and_write": false, 00:05:51.026 "abort": true, 00:05:51.026 "seek_hole": false, 00:05:51.026 "seek_data": false, 00:05:51.026 "copy": true, 00:05:51.026 "nvme_iov_md": false 00:05:51.026 }, 00:05:51.026 "memory_domains": [ 00:05:51.026 { 00:05:51.026 "dma_device_id": "system", 00:05:51.026 "dma_device_type": 1 00:05:51.026 }, 00:05:51.026 { 00:05:51.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.026 "dma_device_type": 2 00:05:51.026 } 
00:05:51.026 ], 00:05:51.026 "driver_specific": {} 00:05:51.026 } 00:05:51.026 ]' 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.026 [2024-10-08 18:16:39.664368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:51.026 [2024-10-08 18:16:39.664426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:51.026 [2024-10-08 18:16:39.664445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:51.026 [2024-10-08 18:16:39.664456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:51.026 [2024-10-08 18:16:39.666669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:51.026 [2024-10-08 18:16:39.666706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:51.026 Passthru0 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.026 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:51.026 { 00:05:51.026 "name": "Malloc2", 00:05:51.026 "aliases": [ 00:05:51.026 "70ced1fe-b9da-4bce-b381-dcb4a2751db0" 
00:05:51.026 ], 00:05:51.026 "product_name": "Malloc disk", 00:05:51.026 "block_size": 512, 00:05:51.026 "num_blocks": 16384, 00:05:51.026 "uuid": "70ced1fe-b9da-4bce-b381-dcb4a2751db0", 00:05:51.026 "assigned_rate_limits": { 00:05:51.026 "rw_ios_per_sec": 0, 00:05:51.026 "rw_mbytes_per_sec": 0, 00:05:51.026 "r_mbytes_per_sec": 0, 00:05:51.026 "w_mbytes_per_sec": 0 00:05:51.026 }, 00:05:51.026 "claimed": true, 00:05:51.026 "claim_type": "exclusive_write", 00:05:51.026 "zoned": false, 00:05:51.026 "supported_io_types": { 00:05:51.026 "read": true, 00:05:51.026 "write": true, 00:05:51.026 "unmap": true, 00:05:51.026 "flush": true, 00:05:51.026 "reset": true, 00:05:51.026 "nvme_admin": false, 00:05:51.026 "nvme_io": false, 00:05:51.026 "nvme_io_md": false, 00:05:51.026 "write_zeroes": true, 00:05:51.027 "zcopy": true, 00:05:51.027 "get_zone_info": false, 00:05:51.027 "zone_management": false, 00:05:51.027 "zone_append": false, 00:05:51.027 "compare": false, 00:05:51.027 "compare_and_write": false, 00:05:51.027 "abort": true, 00:05:51.027 "seek_hole": false, 00:05:51.027 "seek_data": false, 00:05:51.027 "copy": true, 00:05:51.027 "nvme_iov_md": false 00:05:51.027 }, 00:05:51.027 "memory_domains": [ 00:05:51.027 { 00:05:51.027 "dma_device_id": "system", 00:05:51.027 "dma_device_type": 1 00:05:51.027 }, 00:05:51.027 { 00:05:51.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.027 "dma_device_type": 2 00:05:51.027 } 00:05:51.027 ], 00:05:51.027 "driver_specific": {} 00:05:51.027 }, 00:05:51.027 { 00:05:51.027 "name": "Passthru0", 00:05:51.027 "aliases": [ 00:05:51.027 "27714abc-414f-5e9f-adeb-8d0840d64d92" 00:05:51.027 ], 00:05:51.027 "product_name": "passthru", 00:05:51.027 "block_size": 512, 00:05:51.027 "num_blocks": 16384, 00:05:51.027 "uuid": "27714abc-414f-5e9f-adeb-8d0840d64d92", 00:05:51.027 "assigned_rate_limits": { 00:05:51.027 "rw_ios_per_sec": 0, 00:05:51.027 "rw_mbytes_per_sec": 0, 00:05:51.027 "r_mbytes_per_sec": 0, 00:05:51.027 "w_mbytes_per_sec": 0 
00:05:51.027 }, 00:05:51.027 "claimed": false, 00:05:51.027 "zoned": false, 00:05:51.027 "supported_io_types": { 00:05:51.027 "read": true, 00:05:51.027 "write": true, 00:05:51.027 "unmap": true, 00:05:51.027 "flush": true, 00:05:51.027 "reset": true, 00:05:51.027 "nvme_admin": false, 00:05:51.027 "nvme_io": false, 00:05:51.027 "nvme_io_md": false, 00:05:51.027 "write_zeroes": true, 00:05:51.027 "zcopy": true, 00:05:51.027 "get_zone_info": false, 00:05:51.027 "zone_management": false, 00:05:51.027 "zone_append": false, 00:05:51.027 "compare": false, 00:05:51.027 "compare_and_write": false, 00:05:51.027 "abort": true, 00:05:51.027 "seek_hole": false, 00:05:51.027 "seek_data": false, 00:05:51.027 "copy": true, 00:05:51.027 "nvme_iov_md": false 00:05:51.027 }, 00:05:51.027 "memory_domains": [ 00:05:51.027 { 00:05:51.027 "dma_device_id": "system", 00:05:51.027 "dma_device_type": 1 00:05:51.027 }, 00:05:51.027 { 00:05:51.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:51.027 "dma_device_type": 2 00:05:51.027 } 00:05:51.027 ], 00:05:51.027 "driver_specific": { 00:05:51.027 "passthru": { 00:05:51.027 "name": "Passthru0", 00:05:51.027 "base_bdev_name": "Malloc2" 00:05:51.027 } 00:05:51.027 } 00:05:51.027 } 00:05:51.027 ]' 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:51.027 00:05:51.027 real 0m0.218s 00:05:51.027 user 0m0.127s 00:05:51.027 sys 0m0.031s 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.027 18:16:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:51.027 ************************************ 00:05:51.027 END TEST rpc_daemon_integrity 00:05:51.027 ************************************ 00:05:51.027 18:16:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:51.027 18:16:39 rpc -- rpc/rpc.sh@84 -- # killprocess 70790 00:05:51.027 18:16:39 rpc -- common/autotest_common.sh@950 -- # '[' -z 70790 ']' 00:05:51.027 18:16:39 rpc -- common/autotest_common.sh@954 -- # kill -0 70790 00:05:51.027 18:16:39 rpc -- common/autotest_common.sh@955 -- # uname 00:05:51.027 18:16:39 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:51.027 18:16:39 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70790 00:05:51.027 18:16:39 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:51.027 18:16:39 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:51.027 
killing process with pid 70790 00:05:51.027 18:16:39 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70790' 00:05:51.027 18:16:39 rpc -- common/autotest_common.sh@969 -- # kill 70790 00:05:51.027 18:16:39 rpc -- common/autotest_common.sh@974 -- # wait 70790 00:05:51.286 00:05:51.286 real 0m2.277s 00:05:51.286 user 0m2.682s 00:05:51.286 sys 0m0.605s 00:05:51.286 18:16:40 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.286 18:16:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.286 ************************************ 00:05:51.286 END TEST rpc 00:05:51.286 ************************************ 00:05:51.286 18:16:40 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:51.286 18:16:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.286 18:16:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.286 18:16:40 -- common/autotest_common.sh@10 -- # set +x 00:05:51.286 ************************************ 00:05:51.286 START TEST skip_rpc 00:05:51.286 ************************************ 00:05:51.286 18:16:40 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:51.547 * Looking for test storage... 
00:05:51.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.547 18:16:40 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.547 --rc genhtml_branch_coverage=1 00:05:51.547 --rc genhtml_function_coverage=1 00:05:51.547 --rc genhtml_legend=1 00:05:51.547 --rc geninfo_all_blocks=1 00:05:51.547 --rc geninfo_unexecuted_blocks=1 00:05:51.547 00:05:51.547 ' 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.547 --rc genhtml_branch_coverage=1 00:05:51.547 --rc genhtml_function_coverage=1 00:05:51.547 --rc genhtml_legend=1 00:05:51.547 --rc geninfo_all_blocks=1 00:05:51.547 --rc geninfo_unexecuted_blocks=1 00:05:51.547 00:05:51.547 ' 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.547 --rc genhtml_branch_coverage=1 00:05:51.547 --rc genhtml_function_coverage=1 00:05:51.547 --rc genhtml_legend=1 00:05:51.547 --rc geninfo_all_blocks=1 00:05:51.547 --rc geninfo_unexecuted_blocks=1 00:05:51.547 00:05:51.547 ' 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.547 --rc genhtml_branch_coverage=1 00:05:51.547 --rc genhtml_function_coverage=1 00:05:51.547 --rc genhtml_legend=1 00:05:51.547 --rc geninfo_all_blocks=1 00:05:51.547 --rc geninfo_unexecuted_blocks=1 00:05:51.547 00:05:51.547 ' 00:05:51.547 18:16:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:51.547 18:16:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:51.547 18:16:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.547 18:16:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.547 ************************************ 00:05:51.547 START TEST skip_rpc 00:05:51.547 ************************************ 00:05:51.547 18:16:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:51.547 18:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70991 00:05:51.547 18:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.547 18:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:51.547 18:16:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:51.547 [2024-10-08 18:16:40.340793] Starting SPDK v25.01-pre 
git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:05:51.547 [2024-10-08 18:16:40.340908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70991 ] 00:05:51.805 [2024-10-08 18:16:40.468992] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:51.805 [2024-10-08 18:16:40.490591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.805 [2024-10-08 18:16:40.521944] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70991 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70991 ']' 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70991 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70991 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.070 killing process with pid 70991 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70991' 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70991 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70991 00:05:57.070 00:05:57.070 real 0m5.273s 00:05:57.070 user 0m4.943s 00:05:57.070 sys 0m0.233s 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.070 18:16:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.070 ************************************ 00:05:57.070 END TEST skip_rpc 00:05:57.070 ************************************ 00:05:57.070 18:16:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:57.070 18:16:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:05:57.070 18:16:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.070 18:16:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.070 ************************************ 00:05:57.070 START TEST skip_rpc_with_json 00:05:57.070 ************************************ 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71073 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71073 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 71073 ']' 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.070 18:16:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.070 [2024-10-08 18:16:45.657245] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:05:57.070 [2024-10-08 18:16:45.657366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71073 ] 00:05:57.070 [2024-10-08 18:16:45.794983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:57.070 [2024-10-08 18:16:45.813557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.070 [2024-10-08 18:16:45.845148] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.006 [2024-10-08 18:16:46.506345] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:58.006 request: 00:05:58.006 { 00:05:58.006 "trtype": "tcp", 00:05:58.006 "method": "nvmf_get_transports", 00:05:58.006 "req_id": 1 00:05:58.006 } 00:05:58.006 Got JSON-RPC error response 00:05:58.006 response: 00:05:58.006 { 00:05:58.006 "code": -19, 00:05:58.006 "message": "No such device" 00:05:58.006 } 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.006 18:16:46 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.006 [2024-10-08 18:16:46.514442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.006 18:16:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:58.006 { 00:05:58.006 "subsystems": [ 00:05:58.006 { 00:05:58.006 "subsystem": "fsdev", 00:05:58.006 "config": [ 00:05:58.006 { 00:05:58.006 "method": "fsdev_set_opts", 00:05:58.006 "params": { 00:05:58.006 "fsdev_io_pool_size": 65535, 00:05:58.006 "fsdev_io_cache_size": 256 00:05:58.006 } 00:05:58.006 } 00:05:58.006 ] 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "subsystem": "keyring", 00:05:58.006 "config": [] 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "subsystem": "iobuf", 00:05:58.006 "config": [ 00:05:58.006 { 00:05:58.006 "method": "iobuf_set_options", 00:05:58.006 "params": { 00:05:58.006 "small_pool_count": 8192, 00:05:58.006 "large_pool_count": 1024, 00:05:58.006 "small_bufsize": 8192, 00:05:58.006 "large_bufsize": 135168 00:05:58.006 } 00:05:58.006 } 00:05:58.006 ] 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "subsystem": "sock", 00:05:58.006 "config": [ 00:05:58.006 { 00:05:58.006 "method": "sock_set_default_impl", 00:05:58.006 "params": { 00:05:58.006 "impl_name": "posix" 00:05:58.006 } 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "method": "sock_impl_set_options", 00:05:58.006 "params": { 00:05:58.006 "impl_name": "ssl", 00:05:58.006 "recv_buf_size": 4096, 
00:05:58.006 "send_buf_size": 4096, 00:05:58.006 "enable_recv_pipe": true, 00:05:58.006 "enable_quickack": false, 00:05:58.006 "enable_placement_id": 0, 00:05:58.006 "enable_zerocopy_send_server": true, 00:05:58.006 "enable_zerocopy_send_client": false, 00:05:58.006 "zerocopy_threshold": 0, 00:05:58.006 "tls_version": 0, 00:05:58.006 "enable_ktls": false 00:05:58.006 } 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "method": "sock_impl_set_options", 00:05:58.006 "params": { 00:05:58.006 "impl_name": "posix", 00:05:58.006 "recv_buf_size": 2097152, 00:05:58.006 "send_buf_size": 2097152, 00:05:58.006 "enable_recv_pipe": true, 00:05:58.006 "enable_quickack": false, 00:05:58.006 "enable_placement_id": 0, 00:05:58.006 "enable_zerocopy_send_server": true, 00:05:58.006 "enable_zerocopy_send_client": false, 00:05:58.006 "zerocopy_threshold": 0, 00:05:58.006 "tls_version": 0, 00:05:58.006 "enable_ktls": false 00:05:58.006 } 00:05:58.006 } 00:05:58.006 ] 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "subsystem": "vmd", 00:05:58.006 "config": [] 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "subsystem": "accel", 00:05:58.006 "config": [ 00:05:58.006 { 00:05:58.006 "method": "accel_set_options", 00:05:58.006 "params": { 00:05:58.006 "small_cache_size": 128, 00:05:58.006 "large_cache_size": 16, 00:05:58.006 "task_count": 2048, 00:05:58.006 "sequence_count": 2048, 00:05:58.006 "buf_count": 2048 00:05:58.006 } 00:05:58.006 } 00:05:58.006 ] 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "subsystem": "bdev", 00:05:58.006 "config": [ 00:05:58.006 { 00:05:58.006 "method": "bdev_set_options", 00:05:58.006 "params": { 00:05:58.006 "bdev_io_pool_size": 65535, 00:05:58.006 "bdev_io_cache_size": 256, 00:05:58.006 "bdev_auto_examine": true, 00:05:58.006 "iobuf_small_cache_size": 128, 00:05:58.006 "iobuf_large_cache_size": 16 00:05:58.006 } 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "method": "bdev_raid_set_options", 00:05:58.006 "params": { 00:05:58.006 "process_window_size_kb": 1024, 00:05:58.006 
"process_max_bandwidth_mb_sec": 0 00:05:58.006 } 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "method": "bdev_iscsi_set_options", 00:05:58.006 "params": { 00:05:58.006 "timeout_sec": 30 00:05:58.006 } 00:05:58.006 }, 00:05:58.006 { 00:05:58.006 "method": "bdev_nvme_set_options", 00:05:58.006 "params": { 00:05:58.006 "action_on_timeout": "none", 00:05:58.006 "timeout_us": 0, 00:05:58.006 "timeout_admin_us": 0, 00:05:58.006 "keep_alive_timeout_ms": 10000, 00:05:58.006 "arbitration_burst": 0, 00:05:58.006 "low_priority_weight": 0, 00:05:58.006 "medium_priority_weight": 0, 00:05:58.006 "high_priority_weight": 0, 00:05:58.006 "nvme_adminq_poll_period_us": 10000, 00:05:58.006 "nvme_ioq_poll_period_us": 0, 00:05:58.006 "io_queue_requests": 0, 00:05:58.006 "delay_cmd_submit": true, 00:05:58.006 "transport_retry_count": 4, 00:05:58.007 "bdev_retry_count": 3, 00:05:58.007 "transport_ack_timeout": 0, 00:05:58.007 "ctrlr_loss_timeout_sec": 0, 00:05:58.007 "reconnect_delay_sec": 0, 00:05:58.007 "fast_io_fail_timeout_sec": 0, 00:05:58.007 "disable_auto_failback": false, 00:05:58.007 "generate_uuids": false, 00:05:58.007 "transport_tos": 0, 00:05:58.007 "nvme_error_stat": false, 00:05:58.007 "rdma_srq_size": 0, 00:05:58.007 "io_path_stat": false, 00:05:58.007 "allow_accel_sequence": false, 00:05:58.007 "rdma_max_cq_size": 0, 00:05:58.007 "rdma_cm_event_timeout_ms": 0, 00:05:58.007 "dhchap_digests": [ 00:05:58.007 "sha256", 00:05:58.007 "sha384", 00:05:58.007 "sha512" 00:05:58.007 ], 00:05:58.007 "dhchap_dhgroups": [ 00:05:58.007 "null", 00:05:58.007 "ffdhe2048", 00:05:58.007 "ffdhe3072", 00:05:58.007 "ffdhe4096", 00:05:58.007 "ffdhe6144", 00:05:58.007 "ffdhe8192" 00:05:58.007 ] 00:05:58.007 } 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "method": "bdev_nvme_set_hotplug", 00:05:58.007 "params": { 00:05:58.007 "period_us": 100000, 00:05:58.007 "enable": false 00:05:58.007 } 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "method": "bdev_wait_for_examine" 00:05:58.007 } 00:05:58.007 ] 
00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "subsystem": "scsi", 00:05:58.007 "config": null 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "subsystem": "scheduler", 00:05:58.007 "config": [ 00:05:58.007 { 00:05:58.007 "method": "framework_set_scheduler", 00:05:58.007 "params": { 00:05:58.007 "name": "static" 00:05:58.007 } 00:05:58.007 } 00:05:58.007 ] 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "subsystem": "vhost_scsi", 00:05:58.007 "config": [] 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "subsystem": "vhost_blk", 00:05:58.007 "config": [] 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "subsystem": "ublk", 00:05:58.007 "config": [] 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "subsystem": "nbd", 00:05:58.007 "config": [] 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "subsystem": "nvmf", 00:05:58.007 "config": [ 00:05:58.007 { 00:05:58.007 "method": "nvmf_set_config", 00:05:58.007 "params": { 00:05:58.007 "discovery_filter": "match_any", 00:05:58.007 "admin_cmd_passthru": { 00:05:58.007 "identify_ctrlr": false 00:05:58.007 }, 00:05:58.007 "dhchap_digests": [ 00:05:58.007 "sha256", 00:05:58.007 "sha384", 00:05:58.007 "sha512" 00:05:58.007 ], 00:05:58.007 "dhchap_dhgroups": [ 00:05:58.007 "null", 00:05:58.007 "ffdhe2048", 00:05:58.007 "ffdhe3072", 00:05:58.007 "ffdhe4096", 00:05:58.007 "ffdhe6144", 00:05:58.007 "ffdhe8192" 00:05:58.007 ] 00:05:58.007 } 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "method": "nvmf_set_max_subsystems", 00:05:58.007 "params": { 00:05:58.007 "max_subsystems": 1024 00:05:58.007 } 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "method": "nvmf_set_crdt", 00:05:58.007 "params": { 00:05:58.007 "crdt1": 0, 00:05:58.007 "crdt2": 0, 00:05:58.007 "crdt3": 0 00:05:58.007 } 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "method": "nvmf_create_transport", 00:05:58.007 "params": { 00:05:58.007 "trtype": "TCP", 00:05:58.007 "max_queue_depth": 128, 00:05:58.007 "max_io_qpairs_per_ctrlr": 127, 00:05:58.007 "in_capsule_data_size": 4096, 00:05:58.007 "max_io_size": 
131072, 00:05:58.007 "io_unit_size": 131072, 00:05:58.007 "max_aq_depth": 128, 00:05:58.007 "num_shared_buffers": 511, 00:05:58.007 "buf_cache_size": 4294967295, 00:05:58.007 "dif_insert_or_strip": false, 00:05:58.007 "zcopy": false, 00:05:58.007 "c2h_success": true, 00:05:58.007 "sock_priority": 0, 00:05:58.007 "abort_timeout_sec": 1, 00:05:58.007 "ack_timeout": 0, 00:05:58.007 "data_wr_pool_size": 0 00:05:58.007 } 00:05:58.007 } 00:05:58.007 ] 00:05:58.007 }, 00:05:58.007 { 00:05:58.007 "subsystem": "iscsi", 00:05:58.007 "config": [ 00:05:58.007 { 00:05:58.007 "method": "iscsi_set_options", 00:05:58.007 "params": { 00:05:58.007 "node_base": "iqn.2016-06.io.spdk", 00:05:58.007 "max_sessions": 128, 00:05:58.007 "max_connections_per_session": 2, 00:05:58.007 "max_queue_depth": 64, 00:05:58.007 "default_time2wait": 2, 00:05:58.007 "default_time2retain": 20, 00:05:58.007 "first_burst_length": 8192, 00:05:58.007 "immediate_data": true, 00:05:58.007 "allow_duplicated_isid": false, 00:05:58.007 "error_recovery_level": 0, 00:05:58.007 "nop_timeout": 60, 00:05:58.007 "nop_in_interval": 30, 00:05:58.007 "disable_chap": false, 00:05:58.007 "require_chap": false, 00:05:58.007 "mutual_chap": false, 00:05:58.007 "chap_group": 0, 00:05:58.007 "max_large_datain_per_connection": 64, 00:05:58.007 "max_r2t_per_connection": 4, 00:05:58.007 "pdu_pool_size": 36864, 00:05:58.007 "immediate_data_pool_size": 16384, 00:05:58.007 "data_out_pool_size": 2048 00:05:58.007 } 00:05:58.007 } 00:05:58.007 ] 00:05:58.007 } 00:05:58.007 ] 00:05:58.007 } 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71073 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71073 ']' 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71073 00:05:58.007 18:16:46 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71073 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:58.007 killing process with pid 71073 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71073' 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71073 00:05:58.007 18:16:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71073 00:05:58.291 18:16:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71096 00:05:58.291 18:16:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:58.291 18:16:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71096 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71096 ']' 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71096 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71096 00:06:03.587 killing process with pid 71096 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71096' 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71096 00:06:03.587 18:16:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71096 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:03.587 00:06:03.587 real 0m6.625s 00:06:03.587 user 0m6.329s 00:06:03.587 sys 0m0.539s 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.587 ************************************ 00:06:03.587 END TEST skip_rpc_with_json 00:06:03.587 ************************************ 00:06:03.587 18:16:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:03.587 18:16:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.587 18:16:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.587 18:16:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.587 ************************************ 00:06:03.587 START TEST skip_rpc_with_delay 00:06:03.587 ************************************ 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 
00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:03.587 [2024-10-08 18:16:52.329077] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:03.587 [2024-10-08 18:16:52.329194] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:03.587 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:03.587 00:06:03.587 real 0m0.121s 00:06:03.587 user 0m0.061s 00:06:03.588 sys 0m0.058s 00:06:03.588 ************************************ 00:06:03.588 END TEST skip_rpc_with_delay 00:06:03.588 ************************************ 00:06:03.588 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.588 18:16:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:03.588 18:16:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:03.588 18:16:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:03.588 18:16:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:03.588 18:16:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.588 18:16:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.588 18:16:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.588 ************************************ 00:06:03.588 START TEST exit_on_failed_rpc_init 00:06:03.588 ************************************ 00:06:03.588 18:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:03.588 18:16:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71208 00:06:03.588 18:16:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71208 00:06:03.588 18:16:52 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 71208 ']' 00:06:03.588 18:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.588 18:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.588 18:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.588 18:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.588 18:16:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.588 18:16:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:03.846 [2024-10-08 18:16:52.492575] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:03.846 [2024-10-08 18:16:52.492687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71208 ] 00:06:03.846 [2024-10-08 18:16:52.621060] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:03.846 [2024-10-08 18:16:52.639725] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.846 [2024-10-08 18:16:52.668839] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:04.786 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:04.786 [2024-10-08 18:16:53.409272] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:04.786 [2024-10-08 18:16:53.409388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71226 ] 00:06:04.786 [2024-10-08 18:16:53.538002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.786 [2024-10-08 18:16:53.557021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.786 [2024-10-08 18:16:53.588332] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.786 [2024-10-08 18:16:53.588403] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:04.786 [2024-10-08 18:16:53.588418] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:04.786 [2024-10-08 18:16:53.588430] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71208 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 71208 ']' 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 71208 00:06:05.044 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:05.045 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.045 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71208 00:06:05.045 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.045 killing process with pid 71208 00:06:05.045 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.045 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 71208' 00:06:05.045 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 71208 00:06:05.045 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 71208 00:06:05.327 00:06:05.327 real 0m1.516s 00:06:05.327 user 0m1.673s 00:06:05.327 sys 0m0.391s 00:06:05.327 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.327 18:16:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:05.327 ************************************ 00:06:05.327 END TEST exit_on_failed_rpc_init 00:06:05.327 ************************************ 00:06:05.327 18:16:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:05.327 ************************************ 00:06:05.327 END TEST skip_rpc 00:06:05.327 ************************************ 00:06:05.327 00:06:05.327 real 0m13.849s 00:06:05.327 user 0m13.156s 00:06:05.327 sys 0m1.386s 00:06:05.327 18:16:53 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.327 18:16:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.327 18:16:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.327 18:16:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.327 18:16:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.327 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:06:05.327 ************************************ 00:06:05.327 START TEST rpc_client 00:06:05.327 ************************************ 00:06:05.327 18:16:54 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:05.327 * Looking for test storage... 
00:06:05.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:05.327 18:16:54 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.327 18:16:54 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.327 18:16:54 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.327 18:16:54 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.327 18:16:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:05.598 18:16:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:05.598 18:16:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.598 18:16:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:05.598 18:16:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.598 18:16:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.598 18:16:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.599 18:16:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:05.599 18:16:54 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.599 18:16:54 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.599 --rc genhtml_branch_coverage=1 00:06:05.599 --rc genhtml_function_coverage=1 00:06:05.599 --rc genhtml_legend=1 00:06:05.599 --rc geninfo_all_blocks=1 00:06:05.599 --rc geninfo_unexecuted_blocks=1 00:06:05.599 00:06:05.599 ' 00:06:05.599 18:16:54 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.599 --rc genhtml_branch_coverage=1 00:06:05.599 --rc genhtml_function_coverage=1 00:06:05.599 --rc genhtml_legend=1 00:06:05.599 --rc geninfo_all_blocks=1 00:06:05.599 --rc geninfo_unexecuted_blocks=1 00:06:05.599 00:06:05.599 ' 00:06:05.599 18:16:54 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.599 --rc genhtml_branch_coverage=1 00:06:05.599 --rc genhtml_function_coverage=1 00:06:05.599 --rc genhtml_legend=1 00:06:05.599 --rc geninfo_all_blocks=1 00:06:05.599 --rc geninfo_unexecuted_blocks=1 00:06:05.599 00:06:05.599 ' 00:06:05.599 18:16:54 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.599 --rc genhtml_branch_coverage=1 00:06:05.599 --rc genhtml_function_coverage=1 00:06:05.599 --rc genhtml_legend=1 00:06:05.599 --rc geninfo_all_blocks=1 00:06:05.599 --rc geninfo_unexecuted_blocks=1 00:06:05.599 00:06:05.599 ' 00:06:05.599 18:16:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:05.599 OK 00:06:05.599 18:16:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:05.599 00:06:05.599 real 0m0.190s 00:06:05.599 user 0m0.110s 00:06:05.599 sys 0m0.090s 00:06:05.599 18:16:54 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.599 18:16:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:05.599 ************************************ 00:06:05.599 END TEST rpc_client 00:06:05.599 ************************************ 00:06:05.599 18:16:54 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.599 18:16:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.599 18:16:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.599 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:06:05.599 ************************************ 00:06:05.599 START TEST json_config 00:06:05.599 ************************************ 00:06:05.599 18:16:54 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:05.599 18:16:54 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.599 18:16:54 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:05.599 18:16:54 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.599 18:16:54 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.599 18:16:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.599 18:16:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.599 18:16:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.599 18:16:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.599 18:16:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.599 18:16:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.599 18:16:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.599 18:16:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.599 18:16:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.599 18:16:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.599 18:16:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.599 18:16:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:05.599 18:16:54 json_config -- scripts/common.sh@345 -- # : 1 00:06:05.599 18:16:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.599 18:16:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.599 18:16:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:05.599 18:16:54 json_config -- scripts/common.sh@353 -- # local d=1 00:06:05.599 18:16:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.599 18:16:54 json_config -- scripts/common.sh@355 -- # echo 1 00:06:05.599 18:16:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.599 18:16:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:05.599 18:16:54 json_config -- scripts/common.sh@353 -- # local d=2 00:06:05.599 18:16:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.599 18:16:54 json_config -- scripts/common.sh@355 -- # echo 2 00:06:05.599 18:16:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.599 18:16:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.599 18:16:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.599 18:16:54 json_config -- scripts/common.sh@368 -- # return 0 00:06:05.599 18:16:54 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.599 18:16:54 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.599 --rc genhtml_branch_coverage=1 00:06:05.599 --rc genhtml_function_coverage=1 00:06:05.599 --rc genhtml_legend=1 00:06:05.599 --rc geninfo_all_blocks=1 00:06:05.599 --rc geninfo_unexecuted_blocks=1 00:06:05.599 00:06:05.599 ' 00:06:05.599 18:16:54 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.599 --rc genhtml_branch_coverage=1 00:06:05.599 --rc genhtml_function_coverage=1 00:06:05.599 --rc genhtml_legend=1 00:06:05.599 --rc geninfo_all_blocks=1 00:06:05.599 --rc geninfo_unexecuted_blocks=1 00:06:05.599 00:06:05.599 ' 00:06:05.599 18:16:54 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.599 --rc genhtml_branch_coverage=1 00:06:05.599 --rc genhtml_function_coverage=1 00:06:05.599 --rc genhtml_legend=1 00:06:05.599 --rc geninfo_all_blocks=1 00:06:05.599 --rc geninfo_unexecuted_blocks=1 00:06:05.599 00:06:05.599 ' 00:06:05.599 18:16:54 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.599 --rc genhtml_branch_coverage=1 00:06:05.599 --rc genhtml_function_coverage=1 00:06:05.599 --rc genhtml_legend=1 00:06:05.599 --rc geninfo_all_blocks=1 00:06:05.599 --rc geninfo_unexecuted_blocks=1 00:06:05.599 00:06:05.599 ' 00:06:05.599 18:16:54 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ea069fc8-fcc2-43f4-911f-3c99098acd58 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=ea069fc8-fcc2-43f4-911f-3c99098acd58 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.599 18:16:54 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.599 18:16:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.600 18:16:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.600 18:16:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.600 18:16:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.600 18:16:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.600 18:16:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.600 18:16:54 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.600 18:16:54 json_config -- paths/export.sh@5 -- # export PATH 00:06:05.600 18:16:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.600 18:16:54 json_config -- nvmf/common.sh@51 -- # : 0 00:06:05.600 18:16:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.600 18:16:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.600 18:16:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.600 18:16:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.600 18:16:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.600 18:16:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.600 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.600 18:16:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.600 18:16:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.600 18:16:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.600 18:16:54 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:05.600 18:16:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:05.600 18:16:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:05.600 18:16:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:05.600 18:16:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:05.600 WARNING: No tests are enabled so not running JSON configuration tests 00:06:05.600 18:16:54 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:05.600 18:16:54 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:05.600 00:06:05.600 real 0m0.143s 00:06:05.600 user 0m0.095s 00:06:05.600 sys 0m0.052s 00:06:05.600 18:16:54 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:05.600 18:16:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.600 ************************************ 00:06:05.600 END TEST json_config 00:06:05.600 ************************************ 00:06:05.600 18:16:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:05.600 18:16:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:05.600 18:16:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:05.600 18:16:54 -- common/autotest_common.sh@10 -- # set +x 00:06:05.600 ************************************ 00:06:05.600 START TEST json_config_extra_key 00:06:05.600 ************************************ 00:06:05.600 18:16:54 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:05.859 18:16:54 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:05.859 18:16:54 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:06:05.859 18:16:54 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:05.859 18:16:54 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:05.859 18:16:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:05.859 18:16:54 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:05.859 18:16:54 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:05.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.859 --rc genhtml_branch_coverage=1 00:06:05.859 --rc genhtml_function_coverage=1 00:06:05.859 --rc genhtml_legend=1 00:06:05.859 --rc geninfo_all_blocks=1 00:06:05.859 --rc geninfo_unexecuted_blocks=1 00:06:05.859 00:06:05.859 ' 00:06:05.859 18:16:54 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:05.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.859 --rc genhtml_branch_coverage=1 00:06:05.859 --rc genhtml_function_coverage=1 00:06:05.859 --rc 
genhtml_legend=1 00:06:05.859 --rc geninfo_all_blocks=1 00:06:05.859 --rc geninfo_unexecuted_blocks=1 00:06:05.859 00:06:05.859 ' 00:06:05.859 18:16:54 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:05.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.859 --rc genhtml_branch_coverage=1 00:06:05.859 --rc genhtml_function_coverage=1 00:06:05.859 --rc genhtml_legend=1 00:06:05.859 --rc geninfo_all_blocks=1 00:06:05.859 --rc geninfo_unexecuted_blocks=1 00:06:05.859 00:06:05.859 ' 00:06:05.859 18:16:54 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:05.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:05.859 --rc genhtml_branch_coverage=1 00:06:05.859 --rc genhtml_function_coverage=1 00:06:05.859 --rc genhtml_legend=1 00:06:05.859 --rc geninfo_all_blocks=1 00:06:05.859 --rc geninfo_unexecuted_blocks=1 00:06:05.859 00:06:05.859 ' 00:06:05.859 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:05.859 18:16:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:05.859 18:16:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.859 18:16:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.859 18:16:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.859 18:16:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.859 18:16:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ea069fc8-fcc2-43f4-911f-3c99098acd58 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ea069fc8-fcc2-43f4-911f-3c99098acd58 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:05.860 18:16:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:05.860 18:16:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.860 18:16:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.860 18:16:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.860 18:16:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.860 18:16:54 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.860 18:16:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.860 18:16:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:05.860 18:16:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:05.860 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:05.860 18:16:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:05.860 INFO: launching applications... 
00:06:05.860 18:16:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71403 00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:05.860 Waiting for target to run... 00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71403 /var/tmp/spdk_tgt.sock 00:06:05.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:05.860 18:16:54 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 71403 ']' 00:06:05.860 18:16:54 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:05.860 18:16:54 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.860 18:16:54 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:05.860 18:16:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:05.860 18:16:54 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.860 18:16:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:05.860 [2024-10-08 18:16:54.644067] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:05.860 [2024-10-08 18:16:54.644180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71403 ] 00:06:06.118 [2024-10-08 18:16:54.931578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:06.118 [2024-10-08 18:16:54.952800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.376 [2024-10-08 18:16:54.968918] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.635 00:06:06.635 INFO: shutting down applications... 00:06:06.635 18:16:55 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:06.635 18:16:55 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:06.635 18:16:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:06.635 18:16:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:06.635 18:16:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:06.635 18:16:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:06.635 18:16:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:06.635 18:16:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71403 ]] 00:06:06.635 18:16:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71403 00:06:06.635 18:16:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:06.635 18:16:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.635 18:16:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71403 00:06:06.635 18:16:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:07.203 18:16:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:07.203 18:16:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.203 18:16:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71403 00:06:07.203 18:16:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:07.203 18:16:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:07.203 18:16:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:07.203 18:16:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:07.203 SPDK target shutdown done 00:06:07.203 18:16:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:07.203 Success 00:06:07.203 00:06:07.203 real 0m1.556s 00:06:07.203 user 0m1.232s 00:06:07.203 sys 0m0.349s 00:06:07.203 18:16:55 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.203 ************************************ 00:06:07.203 END TEST json_config_extra_key 00:06:07.203 
************************************ 00:06:07.203 18:16:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.203 18:16:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:07.203 18:16:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.203 18:16:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.203 18:16:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.203 ************************************ 00:06:07.203 START TEST alias_rpc 00:06:07.203 ************************************ 00:06:07.203 18:16:56 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:07.462 * Looking for test storage... 00:06:07.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:07.462 18:16:56 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:07.462 18:16:56 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:07.462 18:16:56 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:07.462 18:16:56 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 
00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:07.462 18:16:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:07.462 18:16:56 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:07.462 18:16:56 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.462 --rc genhtml_branch_coverage=1 00:06:07.462 --rc genhtml_function_coverage=1 00:06:07.462 --rc genhtml_legend=1 00:06:07.462 --rc geninfo_all_blocks=1 00:06:07.462 --rc geninfo_unexecuted_blocks=1 00:06:07.462 00:06:07.462 ' 00:06:07.462 18:16:56 alias_rpc -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:06:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.462 --rc genhtml_branch_coverage=1 00:06:07.462 --rc genhtml_function_coverage=1 00:06:07.462 --rc genhtml_legend=1 00:06:07.462 --rc geninfo_all_blocks=1 00:06:07.462 --rc geninfo_unexecuted_blocks=1 00:06:07.462 00:06:07.462 ' 00:06:07.462 18:16:56 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:07.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.462 --rc genhtml_branch_coverage=1 00:06:07.462 --rc genhtml_function_coverage=1 00:06:07.462 --rc genhtml_legend=1 00:06:07.462 --rc geninfo_all_blocks=1 00:06:07.462 --rc geninfo_unexecuted_blocks=1 00:06:07.462 00:06:07.462 ' 00:06:07.463 18:16:56 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:07.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:07.463 --rc genhtml_branch_coverage=1 00:06:07.463 --rc genhtml_function_coverage=1 00:06:07.463 --rc genhtml_legend=1 00:06:07.463 --rc geninfo_all_blocks=1 00:06:07.463 --rc geninfo_unexecuted_blocks=1 00:06:07.463 00:06:07.463 ' 00:06:07.463 18:16:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.463 18:16:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71476 00:06:07.463 18:16:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71476 00:06:07.463 18:16:56 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 71476 ']' 00:06:07.463 18:16:56 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.463 18:16:56 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.463 18:16:56 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:07.463 18:16:56 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.463 18:16:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.463 18:16:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:07.463 [2024-10-08 18:16:56.227122] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:07.463 [2024-10-08 18:16:56.227223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71476 ] 00:06:07.721 [2024-10-08 18:16:56.350603] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:07.721 [2024-10-08 18:16:56.371689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.721 [2024-10-08 18:16:56.414310] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.288 18:16:57 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.288 18:16:57 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:08.288 18:16:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:08.546 18:16:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71476 00:06:08.546 18:16:57 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 71476 ']' 00:06:08.546 18:16:57 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 71476 00:06:08.546 18:16:57 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:08.546 18:16:57 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.546 18:16:57 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71476 00:06:08.546 18:16:57 alias_rpc -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:08.546 killing process with pid 71476 00:06:08.546 18:16:57 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.546 18:16:57 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71476' 00:06:08.546 18:16:57 alias_rpc -- common/autotest_common.sh@969 -- # kill 71476 00:06:08.546 18:16:57 alias_rpc -- common/autotest_common.sh@974 -- # wait 71476 00:06:08.806 00:06:08.806 real 0m1.573s 00:06:08.806 user 0m1.656s 00:06:08.806 sys 0m0.421s 00:06:08.806 18:16:57 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.806 18:16:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.806 ************************************ 00:06:08.806 END TEST alias_rpc 00:06:08.806 ************************************ 00:06:08.806 18:16:57 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:08.806 18:16:57 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:08.806 18:16:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.806 18:16:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.806 18:16:57 -- common/autotest_common.sh@10 -- # set +x 00:06:08.806 ************************************ 00:06:08.806 START TEST spdkcli_tcp 00:06:08.806 ************************************ 00:06:08.806 18:16:57 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:09.066 * Looking for test storage... 
00:06:09.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:09.066 18:16:57 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.066 18:16:57 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.066 18:16:57 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.066 18:16:57 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.066 18:16:57 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:09.066 18:16:57 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.066 18:16:57 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:09.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.066 --rc genhtml_branch_coverage=1 00:06:09.066 --rc genhtml_function_coverage=1 00:06:09.066 --rc genhtml_legend=1 00:06:09.066 --rc geninfo_all_blocks=1 00:06:09.066 --rc geninfo_unexecuted_blocks=1 00:06:09.066 00:06:09.066 ' 00:06:09.066 18:16:57 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:09.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.066 --rc genhtml_branch_coverage=1 00:06:09.066 --rc genhtml_function_coverage=1 00:06:09.066 --rc genhtml_legend=1 00:06:09.066 --rc geninfo_all_blocks=1 00:06:09.066 --rc geninfo_unexecuted_blocks=1 00:06:09.066 00:06:09.066 ' 00:06:09.066 18:16:57 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:09.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.067 --rc genhtml_branch_coverage=1 00:06:09.067 --rc genhtml_function_coverage=1 00:06:09.067 --rc genhtml_legend=1 00:06:09.067 --rc geninfo_all_blocks=1 00:06:09.067 --rc geninfo_unexecuted_blocks=1 00:06:09.067 00:06:09.067 ' 00:06:09.067 18:16:57 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:09.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.067 --rc genhtml_branch_coverage=1 00:06:09.067 --rc genhtml_function_coverage=1 00:06:09.067 --rc genhtml_legend=1 00:06:09.067 --rc geninfo_all_blocks=1 00:06:09.067 --rc geninfo_unexecuted_blocks=1 00:06:09.067 00:06:09.067 ' 00:06:09.067 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:09.067 18:16:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:09.067 18:16:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:09.067 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:09.067 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:09.067 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:09.067 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:09.067 18:16:57 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.067 18:16:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.067 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71561 00:06:09.067 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71561 00:06:09.067 18:16:57 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 71561 ']' 00:06:09.067 18:16:57 spdkcli_tcp -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:09.067 18:16:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:09.067 18:16:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:09.067 18:16:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.067 18:16:57 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:09.067 18:16:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.067 [2024-10-08 18:16:57.886609] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:09.067 [2024-10-08 18:16:57.886733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71561 ] 00:06:09.328 [2024-10-08 18:16:58.015655] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:09.328 [2024-10-08 18:16:58.032894] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.328 [2024-10-08 18:16:58.066946] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.328 [2024-10-08 18:16:58.066995] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.897 18:16:58 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.897 18:16:58 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:09.897 18:16:58 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:09.897 18:16:58 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71577 00:06:09.897 18:16:58 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:10.158 [ 00:06:10.158 "bdev_malloc_delete", 00:06:10.158 "bdev_malloc_create", 00:06:10.158 "bdev_null_resize", 00:06:10.158 "bdev_null_delete", 00:06:10.158 "bdev_null_create", 00:06:10.158 "bdev_nvme_cuse_unregister", 00:06:10.158 "bdev_nvme_cuse_register", 00:06:10.158 "bdev_opal_new_user", 00:06:10.158 "bdev_opal_set_lock_state", 00:06:10.158 "bdev_opal_delete", 00:06:10.158 "bdev_opal_get_info", 00:06:10.158 "bdev_opal_create", 00:06:10.158 "bdev_nvme_opal_revert", 00:06:10.158 "bdev_nvme_opal_init", 00:06:10.158 "bdev_nvme_send_cmd", 00:06:10.158 "bdev_nvme_set_keys", 00:06:10.158 "bdev_nvme_get_path_iostat", 00:06:10.158 "bdev_nvme_get_mdns_discovery_info", 00:06:10.158 "bdev_nvme_stop_mdns_discovery", 00:06:10.158 "bdev_nvme_start_mdns_discovery", 00:06:10.158 "bdev_nvme_set_multipath_policy", 00:06:10.158 "bdev_nvme_set_preferred_path", 00:06:10.158 "bdev_nvme_get_io_paths", 00:06:10.158 "bdev_nvme_remove_error_injection", 00:06:10.158 "bdev_nvme_add_error_injection", 00:06:10.158 "bdev_nvme_get_discovery_info", 00:06:10.158 "bdev_nvme_stop_discovery", 00:06:10.158 "bdev_nvme_start_discovery", 00:06:10.158 
"bdev_nvme_get_controller_health_info", 00:06:10.158 "bdev_nvme_disable_controller", 00:06:10.158 "bdev_nvme_enable_controller", 00:06:10.158 "bdev_nvme_reset_controller", 00:06:10.158 "bdev_nvme_get_transport_statistics", 00:06:10.158 "bdev_nvme_apply_firmware", 00:06:10.158 "bdev_nvme_detach_controller", 00:06:10.158 "bdev_nvme_get_controllers", 00:06:10.158 "bdev_nvme_attach_controller", 00:06:10.158 "bdev_nvme_set_hotplug", 00:06:10.158 "bdev_nvme_set_options", 00:06:10.158 "bdev_passthru_delete", 00:06:10.158 "bdev_passthru_create", 00:06:10.158 "bdev_lvol_set_parent_bdev", 00:06:10.158 "bdev_lvol_set_parent", 00:06:10.158 "bdev_lvol_check_shallow_copy", 00:06:10.158 "bdev_lvol_start_shallow_copy", 00:06:10.158 "bdev_lvol_grow_lvstore", 00:06:10.158 "bdev_lvol_get_lvols", 00:06:10.158 "bdev_lvol_get_lvstores", 00:06:10.158 "bdev_lvol_delete", 00:06:10.158 "bdev_lvol_set_read_only", 00:06:10.158 "bdev_lvol_resize", 00:06:10.158 "bdev_lvol_decouple_parent", 00:06:10.158 "bdev_lvol_inflate", 00:06:10.158 "bdev_lvol_rename", 00:06:10.158 "bdev_lvol_clone_bdev", 00:06:10.158 "bdev_lvol_clone", 00:06:10.158 "bdev_lvol_snapshot", 00:06:10.158 "bdev_lvol_create", 00:06:10.158 "bdev_lvol_delete_lvstore", 00:06:10.158 "bdev_lvol_rename_lvstore", 00:06:10.158 "bdev_lvol_create_lvstore", 00:06:10.158 "bdev_raid_set_options", 00:06:10.158 "bdev_raid_remove_base_bdev", 00:06:10.158 "bdev_raid_add_base_bdev", 00:06:10.158 "bdev_raid_delete", 00:06:10.158 "bdev_raid_create", 00:06:10.158 "bdev_raid_get_bdevs", 00:06:10.158 "bdev_error_inject_error", 00:06:10.158 "bdev_error_delete", 00:06:10.158 "bdev_error_create", 00:06:10.158 "bdev_split_delete", 00:06:10.158 "bdev_split_create", 00:06:10.158 "bdev_delay_delete", 00:06:10.158 "bdev_delay_create", 00:06:10.158 "bdev_delay_update_latency", 00:06:10.158 "bdev_zone_block_delete", 00:06:10.158 "bdev_zone_block_create", 00:06:10.158 "blobfs_create", 00:06:10.158 "blobfs_detect", 00:06:10.158 "blobfs_set_cache_size", 00:06:10.158 
"bdev_xnvme_delete", 00:06:10.158 "bdev_xnvme_create", 00:06:10.158 "bdev_aio_delete", 00:06:10.158 "bdev_aio_rescan", 00:06:10.158 "bdev_aio_create", 00:06:10.158 "bdev_ftl_set_property", 00:06:10.158 "bdev_ftl_get_properties", 00:06:10.158 "bdev_ftl_get_stats", 00:06:10.158 "bdev_ftl_unmap", 00:06:10.158 "bdev_ftl_unload", 00:06:10.158 "bdev_ftl_delete", 00:06:10.158 "bdev_ftl_load", 00:06:10.158 "bdev_ftl_create", 00:06:10.158 "bdev_virtio_attach_controller", 00:06:10.158 "bdev_virtio_scsi_get_devices", 00:06:10.158 "bdev_virtio_detach_controller", 00:06:10.158 "bdev_virtio_blk_set_hotplug", 00:06:10.158 "bdev_iscsi_delete", 00:06:10.158 "bdev_iscsi_create", 00:06:10.158 "bdev_iscsi_set_options", 00:06:10.158 "accel_error_inject_error", 00:06:10.158 "ioat_scan_accel_module", 00:06:10.158 "dsa_scan_accel_module", 00:06:10.158 "iaa_scan_accel_module", 00:06:10.158 "keyring_file_remove_key", 00:06:10.158 "keyring_file_add_key", 00:06:10.158 "keyring_linux_set_options", 00:06:10.158 "fsdev_aio_delete", 00:06:10.158 "fsdev_aio_create", 00:06:10.158 "iscsi_get_histogram", 00:06:10.158 "iscsi_enable_histogram", 00:06:10.158 "iscsi_set_options", 00:06:10.158 "iscsi_get_auth_groups", 00:06:10.158 "iscsi_auth_group_remove_secret", 00:06:10.158 "iscsi_auth_group_add_secret", 00:06:10.158 "iscsi_delete_auth_group", 00:06:10.158 "iscsi_create_auth_group", 00:06:10.158 "iscsi_set_discovery_auth", 00:06:10.158 "iscsi_get_options", 00:06:10.158 "iscsi_target_node_request_logout", 00:06:10.158 "iscsi_target_node_set_redirect", 00:06:10.158 "iscsi_target_node_set_auth", 00:06:10.158 "iscsi_target_node_add_lun", 00:06:10.158 "iscsi_get_stats", 00:06:10.158 "iscsi_get_connections", 00:06:10.158 "iscsi_portal_group_set_auth", 00:06:10.158 "iscsi_start_portal_group", 00:06:10.158 "iscsi_delete_portal_group", 00:06:10.158 "iscsi_create_portal_group", 00:06:10.158 "iscsi_get_portal_groups", 00:06:10.158 "iscsi_delete_target_node", 00:06:10.158 "iscsi_target_node_remove_pg_ig_maps", 
00:06:10.158 "iscsi_target_node_add_pg_ig_maps", 00:06:10.158 "iscsi_create_target_node", 00:06:10.158 "iscsi_get_target_nodes", 00:06:10.158 "iscsi_delete_initiator_group", 00:06:10.158 "iscsi_initiator_group_remove_initiators", 00:06:10.159 "iscsi_initiator_group_add_initiators", 00:06:10.159 "iscsi_create_initiator_group", 00:06:10.159 "iscsi_get_initiator_groups", 00:06:10.159 "nvmf_set_crdt", 00:06:10.159 "nvmf_set_config", 00:06:10.159 "nvmf_set_max_subsystems", 00:06:10.159 "nvmf_stop_mdns_prr", 00:06:10.159 "nvmf_publish_mdns_prr", 00:06:10.159 "nvmf_subsystem_get_listeners", 00:06:10.159 "nvmf_subsystem_get_qpairs", 00:06:10.159 "nvmf_subsystem_get_controllers", 00:06:10.159 "nvmf_get_stats", 00:06:10.159 "nvmf_get_transports", 00:06:10.159 "nvmf_create_transport", 00:06:10.159 "nvmf_get_targets", 00:06:10.159 "nvmf_delete_target", 00:06:10.159 "nvmf_create_target", 00:06:10.159 "nvmf_subsystem_allow_any_host", 00:06:10.159 "nvmf_subsystem_set_keys", 00:06:10.159 "nvmf_subsystem_remove_host", 00:06:10.159 "nvmf_subsystem_add_host", 00:06:10.159 "nvmf_ns_remove_host", 00:06:10.159 "nvmf_ns_add_host", 00:06:10.159 "nvmf_subsystem_remove_ns", 00:06:10.159 "nvmf_subsystem_set_ns_ana_group", 00:06:10.159 "nvmf_subsystem_add_ns", 00:06:10.159 "nvmf_subsystem_listener_set_ana_state", 00:06:10.159 "nvmf_discovery_get_referrals", 00:06:10.159 "nvmf_discovery_remove_referral", 00:06:10.159 "nvmf_discovery_add_referral", 00:06:10.159 "nvmf_subsystem_remove_listener", 00:06:10.159 "nvmf_subsystem_add_listener", 00:06:10.159 "nvmf_delete_subsystem", 00:06:10.159 "nvmf_create_subsystem", 00:06:10.159 "nvmf_get_subsystems", 00:06:10.159 "env_dpdk_get_mem_stats", 00:06:10.159 "nbd_get_disks", 00:06:10.159 "nbd_stop_disk", 00:06:10.159 "nbd_start_disk", 00:06:10.159 "ublk_recover_disk", 00:06:10.159 "ublk_get_disks", 00:06:10.159 "ublk_stop_disk", 00:06:10.159 "ublk_start_disk", 00:06:10.159 "ublk_destroy_target", 00:06:10.159 "ublk_create_target", 00:06:10.159 
"virtio_blk_create_transport", 00:06:10.159 "virtio_blk_get_transports", 00:06:10.159 "vhost_controller_set_coalescing", 00:06:10.159 "vhost_get_controllers", 00:06:10.159 "vhost_delete_controller", 00:06:10.159 "vhost_create_blk_controller", 00:06:10.159 "vhost_scsi_controller_remove_target", 00:06:10.159 "vhost_scsi_controller_add_target", 00:06:10.159 "vhost_start_scsi_controller", 00:06:10.159 "vhost_create_scsi_controller", 00:06:10.159 "thread_set_cpumask", 00:06:10.159 "scheduler_set_options", 00:06:10.159 "framework_get_governor", 00:06:10.159 "framework_get_scheduler", 00:06:10.159 "framework_set_scheduler", 00:06:10.159 "framework_get_reactors", 00:06:10.159 "thread_get_io_channels", 00:06:10.159 "thread_get_pollers", 00:06:10.159 "thread_get_stats", 00:06:10.159 "framework_monitor_context_switch", 00:06:10.159 "spdk_kill_instance", 00:06:10.159 "log_enable_timestamps", 00:06:10.159 "log_get_flags", 00:06:10.159 "log_clear_flag", 00:06:10.159 "log_set_flag", 00:06:10.159 "log_get_level", 00:06:10.159 "log_set_level", 00:06:10.159 "log_get_print_level", 00:06:10.159 "log_set_print_level", 00:06:10.159 "framework_enable_cpumask_locks", 00:06:10.159 "framework_disable_cpumask_locks", 00:06:10.159 "framework_wait_init", 00:06:10.159 "framework_start_init", 00:06:10.159 "scsi_get_devices", 00:06:10.159 "bdev_get_histogram", 00:06:10.159 "bdev_enable_histogram", 00:06:10.159 "bdev_set_qos_limit", 00:06:10.159 "bdev_set_qd_sampling_period", 00:06:10.159 "bdev_get_bdevs", 00:06:10.159 "bdev_reset_iostat", 00:06:10.159 "bdev_get_iostat", 00:06:10.159 "bdev_examine", 00:06:10.159 "bdev_wait_for_examine", 00:06:10.159 "bdev_set_options", 00:06:10.159 "accel_get_stats", 00:06:10.159 "accel_set_options", 00:06:10.159 "accel_set_driver", 00:06:10.159 "accel_crypto_key_destroy", 00:06:10.159 "accel_crypto_keys_get", 00:06:10.159 "accel_crypto_key_create", 00:06:10.159 "accel_assign_opc", 00:06:10.159 "accel_get_module_info", 00:06:10.159 "accel_get_opc_assignments", 
00:06:10.159 "vmd_rescan", 00:06:10.159 "vmd_remove_device", 00:06:10.159 "vmd_enable", 00:06:10.159 "sock_get_default_impl", 00:06:10.159 "sock_set_default_impl", 00:06:10.159 "sock_impl_set_options", 00:06:10.159 "sock_impl_get_options", 00:06:10.159 "iobuf_get_stats", 00:06:10.159 "iobuf_set_options", 00:06:10.159 "keyring_get_keys", 00:06:10.159 "framework_get_pci_devices", 00:06:10.159 "framework_get_config", 00:06:10.159 "framework_get_subsystems", 00:06:10.159 "fsdev_set_opts", 00:06:10.159 "fsdev_get_opts", 00:06:10.159 "trace_get_info", 00:06:10.159 "trace_get_tpoint_group_mask", 00:06:10.159 "trace_disable_tpoint_group", 00:06:10.159 "trace_enable_tpoint_group", 00:06:10.159 "trace_clear_tpoint_mask", 00:06:10.159 "trace_set_tpoint_mask", 00:06:10.159 "notify_get_notifications", 00:06:10.159 "notify_get_types", 00:06:10.159 "spdk_get_version", 00:06:10.159 "rpc_get_methods" 00:06:10.159 ] 00:06:10.159 18:16:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.159 18:16:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:10.159 18:16:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71561 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 71561 ']' 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 71561 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71561 00:06:10.159 killing process with pid 71561 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71561' 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 71561 00:06:10.159 18:16:58 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 71561 00:06:10.420 ************************************ 00:06:10.420 END TEST spdkcli_tcp 00:06:10.420 ************************************ 00:06:10.420 00:06:10.420 real 0m1.609s 00:06:10.420 user 0m2.813s 00:06:10.420 sys 0m0.419s 00:06:10.420 18:16:59 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.420 18:16:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.679 18:16:59 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:10.679 18:16:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.679 18:16:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.679 18:16:59 -- common/autotest_common.sh@10 -- # set +x 00:06:10.679 ************************************ 00:06:10.679 START TEST dpdk_mem_utility 00:06:10.679 ************************************ 00:06:10.679 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:10.679 * Looking for test storage... 
00:06:10.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:10.679 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.679 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.679 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.679 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.679 18:16:59 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:10.679 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.679 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.679 --rc genhtml_branch_coverage=1 00:06:10.679 --rc genhtml_function_coverage=1 00:06:10.679 --rc genhtml_legend=1 00:06:10.679 --rc geninfo_all_blocks=1 00:06:10.679 --rc geninfo_unexecuted_blocks=1 00:06:10.679 00:06:10.679 ' 00:06:10.679 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.679 --rc genhtml_branch_coverage=1 00:06:10.679 --rc genhtml_function_coverage=1 00:06:10.679 --rc genhtml_legend=1 00:06:10.679 --rc geninfo_all_blocks=1 00:06:10.679 --rc 
geninfo_unexecuted_blocks=1 00:06:10.679 00:06:10.679 ' 00:06:10.679 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.679 --rc genhtml_branch_coverage=1 00:06:10.679 --rc genhtml_function_coverage=1 00:06:10.679 --rc genhtml_legend=1 00:06:10.679 --rc geninfo_all_blocks=1 00:06:10.680 --rc geninfo_unexecuted_blocks=1 00:06:10.680 00:06:10.680 ' 00:06:10.680 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.680 --rc genhtml_branch_coverage=1 00:06:10.680 --rc genhtml_function_coverage=1 00:06:10.680 --rc genhtml_legend=1 00:06:10.680 --rc geninfo_all_blocks=1 00:06:10.680 --rc geninfo_unexecuted_blocks=1 00:06:10.680 00:06:10.680 ' 00:06:10.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.680 18:16:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:10.680 18:16:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71656 00:06:10.680 18:16:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71656 00:06:10.680 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71656 ']' 00:06:10.680 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.680 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.680 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:10.680 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.680 18:16:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:10.680 18:16:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:10.680 [2024-10-08 18:16:59.526041] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:10.680 [2024-10-08 18:16:59.526165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71656 ] 00:06:10.938 [2024-10-08 18:16:59.654649] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.938 [2024-10-08 18:16:59.675378] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.938 [2024-10-08 18:16:59.706706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.544 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.544 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:11.544 18:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:11.544 18:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:11.544 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:11.544 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.544 { 00:06:11.544 "filename": "/tmp/spdk_mem_dump.txt" 00:06:11.544 } 00:06:11.544 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:11.544 18:17:00 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:11.806 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:11.806 1 heaps totaling size 860.000000 MiB 00:06:11.806 size: 860.000000 MiB heap id: 0 00:06:11.806 end heaps---------- 00:06:11.806 9 mempools totaling size 642.649841 MiB 00:06:11.806 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:11.806 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:11.806 size: 92.545471 MiB name: bdev_io_71656 00:06:11.806 size: 51.011292 MiB name: evtpool_71656 00:06:11.806 size: 50.003479 MiB name: msgpool_71656 00:06:11.806 size: 36.509338 MiB name: fsdev_io_71656 00:06:11.806 size: 21.763794 MiB name: PDU_Pool 00:06:11.806 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:11.806 size: 0.026123 MiB name: Session_Pool 00:06:11.806 end mempools------- 00:06:11.806 6 memzones totaling size 4.142822 MiB 00:06:11.806 size: 1.000366 MiB name: RG_ring_0_71656 00:06:11.806 size: 1.000366 MiB name: RG_ring_1_71656 00:06:11.806 size: 1.000366 MiB name: RG_ring_4_71656 00:06:11.806 size: 1.000366 MiB name: RG_ring_5_71656 00:06:11.806 size: 0.125366 MiB name: RG_ring_2_71656 00:06:11.806 size: 0.015991 MiB name: RG_ring_3_71656 00:06:11.806 end memzones------- 00:06:11.806 18:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:11.806 heap id: 0 total size: 860.000000 MiB number of busy elements: 308 number of free elements: 16 00:06:11.806 list of free elements. 
size: 13.811340 MiB 00:06:11.807 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:11.807 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:11.807 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:11.807 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:11.807 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:11.807 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:11.807 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:11.807 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:11.807 element at address: 0x200000200000 with size: 0.709839 MiB 00:06:11.807 element at address: 0x20001d800000 with size: 0.568237 MiB 00:06:11.807 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:11.807 element at address: 0x200003e00000 with size: 0.487549 MiB 00:06:11.807 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:11.807 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:11.807 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:11.807 element at address: 0x200003a00000 with size: 0.353210 MiB 00:06:11.807 list of standard malloc elements. 
size: 199.391968 MiB 00:06:11.807 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:11.807 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:11.807 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:11.807 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:11.807 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:11.807 element at address: 0x2000003b9f00 with size: 0.265747 MiB 00:06:11.807 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:11.807 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:11.807 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:11.807 element at address: 0x2000002b5b80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b5c40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b5d00 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b5dc0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b5e80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b5f40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6000 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b60c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6180 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6240 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6300 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b63c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6480 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6540 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6600 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b66c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b68c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6980 with size: 0.000183 MiB 00:06:11.807 element at 
address: 0x2000002b6a40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6b00 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6bc0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6c80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6d40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6e00 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6ec0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b6f80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7040 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7100 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b71c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7280 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7340 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7400 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b74c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7580 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7640 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7700 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b77c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7880 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7940 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7a00 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7ac0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7b80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000002b7c40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000003b9e40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a5a6c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 
00:06:11.807 element at address: 0x200003a5eb80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003a7f680 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003aff940 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d000 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d600 with 
size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:11.807 element at address: 
0x200003e7eb00 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003eff000 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:11.807 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:11.807 
element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:11.807 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:11.807 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892800 with size: 0.000183 
MiB 00:06:11.808 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893d00 
with size: 0.000183 MiB 00:06:11.808 element at address: 0x20001d893dc0 with size: 0.000183 MiB [... several hundred further free-list elements in the ranges 0x20001d893e80-0x20001d895440 and 0x20002ac65500-0x20002ac6ff00, each with size: 0.000183 MiB, elided ...] 00:06:11.809 list of memzone associated elements. size: 646.796692 MiB 00:06:11.809 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:11.809 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:11.809 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:11.809 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:11.809 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:11.809 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71656_0 00:06:11.809 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:11.809 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71656_0 00:06:11.809 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:11.809 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71656_0 00:06:11.809 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:11.809 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71656_0 00:06:11.809 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:11.809 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:11.809 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:11.809 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:11.809 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:11.809 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71656 00:06:11.809 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:11.809 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71656 00:06:11.809 element at address: 0x2000002b7d00 with size: 1.008118 MiB 00:06:11.809 associated
memzone info: size: 1.007996 MiB name: MP_evtpool_71656 00:06:11.809 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:11.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:11.809 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:11.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:11.809 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:11.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:11.809 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:11.809 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:11.809 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:11.809 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71656 00:06:11.809 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:11.809 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71656 00:06:11.809 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:11.809 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71656 00:06:11.809 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:11.809 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71656 00:06:11.809 element at address: 0x200003a7f740 with size: 0.500488 MiB 00:06:11.809 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71656 00:06:11.809 element at address: 0x200003e7ee00 with size: 0.500488 MiB 00:06:11.809 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71656 00:06:11.809 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:11.809 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:11.809 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:11.809 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:11.809 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:11.809 associated memzone 
info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:11.809 element at address: 0x200003a5ec40 with size: 0.125488 MiB 00:06:11.809 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71656 00:06:11.809 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:11.809 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:11.809 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:11.809 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:11.809 element at address: 0x200003a5a980 with size: 0.016113 MiB 00:06:11.809 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71656 00:06:11.809 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:11.809 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:11.809 element at address: 0x2000002b6780 with size: 0.000305 MiB 00:06:11.809 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71656 00:06:11.809 element at address: 0x200003affa00 with size: 0.000305 MiB 00:06:11.809 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71656 00:06:11.809 element at address: 0x200003a5a780 with size: 0.000305 MiB 00:06:11.809 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71656 00:06:11.809 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:11.809 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:11.809 18:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:11.809 18:17:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71656 00:06:11.809 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71656 ']' 00:06:11.809 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71656 00:06:11.809 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:11.809 18:17:00 dpdk_mem_utility -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.809 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71656 00:06:11.809 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.809 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.809 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71656' 00:06:11.809 killing process with pid 71656 00:06:11.809 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71656 00:06:11.809 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71656 00:06:12.068 00:06:12.068 real 0m1.439s 00:06:12.068 user 0m1.469s 00:06:12.068 sys 0m0.371s 00:06:12.068 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.068 ************************************ 00:06:12.068 END TEST dpdk_mem_utility 00:06:12.068 ************************************ 00:06:12.068 18:17:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.068 18:17:00 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:12.068 18:17:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.068 18:17:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.068 18:17:00 -- common/autotest_common.sh@10 -- # set +x 00:06:12.068 ************************************ 00:06:12.068 START TEST event 00:06:12.068 ************************************ 00:06:12.068 18:17:00 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:12.068 * Looking for test storage... 
00:06:12.068 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:12.068 18:17:00 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:12.068 18:17:00 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:12.068 18:17:00 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:12.342 18:17:00 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:12.342 18:17:00 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.342 18:17:00 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.342 18:17:00 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.342 18:17:00 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.342 18:17:00 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.342 18:17:00 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.342 18:17:00 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.342 18:17:00 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.342 18:17:00 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.342 18:17:00 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.342 18:17:00 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.342 18:17:00 event -- scripts/common.sh@344 -- # case "$op" in 00:06:12.342 18:17:00 event -- scripts/common.sh@345 -- # : 1 00:06:12.342 18:17:00 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.342 18:17:00 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.342 18:17:00 event -- scripts/common.sh@365 -- # decimal 1 00:06:12.342 18:17:00 event -- scripts/common.sh@353 -- # local d=1 00:06:12.342 18:17:00 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.342 18:17:00 event -- scripts/common.sh@355 -- # echo 1 00:06:12.342 18:17:00 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.342 18:17:00 event -- scripts/common.sh@366 -- # decimal 2 00:06:12.342 18:17:00 event -- scripts/common.sh@353 -- # local d=2 00:06:12.342 18:17:00 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.342 18:17:00 event -- scripts/common.sh@355 -- # echo 2 00:06:12.342 18:17:00 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.342 18:17:00 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.342 18:17:00 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.342 18:17:00 event -- scripts/common.sh@368 -- # return 0 00:06:12.342 18:17:00 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.342 18:17:00 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:12.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.342 --rc genhtml_branch_coverage=1 00:06:12.342 --rc genhtml_function_coverage=1 00:06:12.342 --rc genhtml_legend=1 00:06:12.342 --rc geninfo_all_blocks=1 00:06:12.342 --rc geninfo_unexecuted_blocks=1 00:06:12.342 00:06:12.342 ' 00:06:12.342 18:17:00 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:12.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.342 --rc genhtml_branch_coverage=1 00:06:12.342 --rc genhtml_function_coverage=1 00:06:12.342 --rc genhtml_legend=1 00:06:12.342 --rc geninfo_all_blocks=1 00:06:12.342 --rc geninfo_unexecuted_blocks=1 00:06:12.342 00:06:12.342 ' 00:06:12.342 18:17:00 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:12.342 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:12.342 --rc genhtml_branch_coverage=1 00:06:12.342 --rc genhtml_function_coverage=1 00:06:12.342 --rc genhtml_legend=1 00:06:12.342 --rc geninfo_all_blocks=1 00:06:12.342 --rc geninfo_unexecuted_blocks=1 00:06:12.342 00:06:12.342 ' 00:06:12.342 18:17:00 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:12.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.342 --rc genhtml_branch_coverage=1 00:06:12.342 --rc genhtml_function_coverage=1 00:06:12.342 --rc genhtml_legend=1 00:06:12.342 --rc geninfo_all_blocks=1 00:06:12.342 --rc geninfo_unexecuted_blocks=1 00:06:12.342 00:06:12.342 ' 00:06:12.342 18:17:00 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:12.342 18:17:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:12.342 18:17:00 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.342 18:17:00 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:12.342 18:17:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.342 18:17:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.342 ************************************ 00:06:12.342 START TEST event_perf 00:06:12.342 ************************************ 00:06:12.342 18:17:00 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.342 Running I/O for 1 seconds...[2024-10-08 18:17:00.977531] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:06:12.342 [2024-10-08 18:17:00.977646] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71736 ] 00:06:12.342 [2024-10-08 18:17:01.107386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:12.342 [2024-10-08 18:17:01.128104] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.342 [2024-10-08 18:17:01.163685] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.342 [2024-10-08 18:17:01.163979] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.342 [2024-10-08 18:17:01.164225] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.342 [2024-10-08 18:17:01.164304] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.741 Running I/O for 1 seconds... 00:06:13.741 lcore 0: 149913 00:06:13.741 lcore 1: 149912 00:06:13.741 lcore 2: 149909 00:06:13.741 lcore 3: 149911 00:06:13.741 done. 
00:06:13.741 00:06:13.741 real 0m1.275s 00:06:13.741 user 0m4.080s 00:06:13.741 sys 0m0.077s 00:06:13.741 18:17:02 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.741 ************************************ 00:06:13.741 END TEST event_perf 00:06:13.741 ************************************ 00:06:13.741 18:17:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.741 18:17:02 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:13.741 18:17:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:13.741 18:17:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.741 18:17:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:13.741 ************************************ 00:06:13.741 START TEST event_reactor 00:06:13.741 ************************************ 00:06:13.741 18:17:02 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:13.741 [2024-10-08 18:17:02.301774] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:13.741 [2024-10-08 18:17:02.301891] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71770 ] 00:06:13.741 [2024-10-08 18:17:02.430916] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:13.741 [2024-10-08 18:17:02.447873] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.741 [2024-10-08 18:17:02.481593] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.127 test_start 00:06:15.127 oneshot 00:06:15.127 tick 100 00:06:15.127 tick 100 00:06:15.127 tick 250 00:06:15.127 tick 100 00:06:15.127 tick 100 00:06:15.128 tick 100 00:06:15.128 tick 250 00:06:15.128 tick 500 00:06:15.128 tick 100 00:06:15.128 tick 100 00:06:15.128 tick 250 00:06:15.128 tick 100 00:06:15.128 tick 100 00:06:15.128 test_end 00:06:15.128 ************************************ 00:06:15.128 END TEST event_reactor 00:06:15.128 ************************************ 00:06:15.128 00:06:15.128 real 0m1.268s 00:06:15.128 user 0m1.091s 00:06:15.128 sys 0m0.069s 00:06:15.128 18:17:03 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.128 18:17:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:15.128 18:17:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.128 18:17:03 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:15.128 18:17:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.128 18:17:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.128 ************************************ 00:06:15.128 START TEST event_reactor_perf 00:06:15.128 ************************************ 00:06:15.128 18:17:03 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.128 [2024-10-08 18:17:03.619481] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:06:15.128 [2024-10-08 18:17:03.619592] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71807 ] 00:06:15.128 [2024-10-08 18:17:03.746238] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.128 [2024-10-08 18:17:03.758849] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.128 [2024-10-08 18:17:03.792003] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.073 test_start 00:06:16.073 test_end 00:06:16.073 Performance: 314151 events per second 00:06:16.073 00:06:16.073 real 0m1.264s 00:06:16.073 user 0m1.086s 00:06:16.073 sys 0m0.070s 00:06:16.073 18:17:04 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.073 18:17:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.073 ************************************ 00:06:16.073 END TEST event_reactor_perf 00:06:16.073 ************************************ 00:06:16.073 18:17:04 event -- event/event.sh@49 -- # uname -s 00:06:16.073 18:17:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:16.073 18:17:04 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:16.073 18:17:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.073 18:17:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.073 18:17:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.336 ************************************ 00:06:16.336 START TEST event_scheduler 00:06:16.336 ************************************ 00:06:16.336 18:17:04 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:16.336 * Looking for test storage... 00:06:16.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:16.336 18:17:04 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:16.336 18:17:04 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:16.336 18:17:04 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.336 18:17:05 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:16.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.336 --rc genhtml_branch_coverage=1 00:06:16.336 --rc genhtml_function_coverage=1 00:06:16.336 --rc genhtml_legend=1 00:06:16.336 --rc geninfo_all_blocks=1 00:06:16.336 --rc geninfo_unexecuted_blocks=1 00:06:16.336 00:06:16.336 ' 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:16.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.336 --rc genhtml_branch_coverage=1 00:06:16.336 --rc genhtml_function_coverage=1 00:06:16.336 --rc 
genhtml_legend=1 00:06:16.336 --rc geninfo_all_blocks=1 00:06:16.336 --rc geninfo_unexecuted_blocks=1 00:06:16.336 00:06:16.336 ' 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:16.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.336 --rc genhtml_branch_coverage=1 00:06:16.336 --rc genhtml_function_coverage=1 00:06:16.336 --rc genhtml_legend=1 00:06:16.336 --rc geninfo_all_blocks=1 00:06:16.336 --rc geninfo_unexecuted_blocks=1 00:06:16.336 00:06:16.336 ' 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:16.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.336 --rc genhtml_branch_coverage=1 00:06:16.336 --rc genhtml_function_coverage=1 00:06:16.336 --rc genhtml_legend=1 00:06:16.336 --rc geninfo_all_blocks=1 00:06:16.336 --rc geninfo_unexecuted_blocks=1 00:06:16.336 00:06:16.336 ' 00:06:16.336 18:17:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:16.336 18:17:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71877 00:06:16.336 18:17:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:16.336 18:17:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71877 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71877 ']' 00:06:16.336 18:17:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.336 18:17:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:16.336 [2024-10-08 18:17:05.145500] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:16.336 [2024-10-08 18:17:05.145641] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71877 ] 00:06:16.598 [2024-10-08 18:17:05.281117] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:16.598 [2024-10-08 18:17:05.299675] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.598 [2024-10-08 18:17:05.376478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.598 [2024-10-08 18:17:05.376813] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.598 [2024-10-08 18:17:05.377033] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.598 [2024-10-08 18:17:05.377046] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.169 18:17:06 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.169 18:17:06 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:17.169 18:17:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:17.169 18:17:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.169 18:17:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.169 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.169 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.169 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.169 POWER: Cannot set governor of lcore 0 to performance 00:06:17.169 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.169 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.169 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:17.169 POWER: Cannot set governor of lcore 0 to userspace 00:06:17.169 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:17.169 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:17.169 POWER: Unable to set Power Management Environment for lcore 0 00:06:17.169 [2024-10-08 18:17:06.007371] 
dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:17.169 [2024-10-08 18:17:06.007394] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:17.169 [2024-10-08 18:17:06.007406] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:17.169 [2024-10-08 18:17:06.007422] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:17.169 [2024-10-08 18:17:06.007447] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:17.169 [2024-10-08 18:17:06.007454] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:17.169 18:17:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.169 18:17:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:17.169 18:17:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.169 18:17:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 [2024-10-08 18:17:06.118303] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:17.430 18:17:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:17.430 18:17:06 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.430 18:17:06 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 ************************************ 00:06:17.430 START TEST scheduler_create_thread 00:06:17.430 ************************************ 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 2 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 3 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 4 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 5 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 6 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:17.430 7 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 8 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 9 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 10 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:17.430 18:17:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.374 18:17:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.374 18:17:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:18.374 18:17:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.374 18:17:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.759 18:17:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.759 18:17:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:19.759 18:17:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:19.759 18:17:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.759 18:17:08 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.701 ************************************ 00:06:20.701 END TEST scheduler_create_thread 00:06:20.701 ************************************ 00:06:20.701 18:17:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.701 00:06:20.701 real 0m3.374s 00:06:20.701 user 0m0.017s 00:06:20.701 sys 0m0.005s 00:06:20.701 18:17:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.701 18:17:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.958 18:17:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:20.958 18:17:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71877 00:06:20.958 18:17:09 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71877 ']' 00:06:20.958 18:17:09 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71877 00:06:20.958 18:17:09 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:20.958 18:17:09 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.958 18:17:09 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71877 00:06:20.958 killing process with pid 71877 00:06:20.958 18:17:09 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:20.958 18:17:09 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:20.958 18:17:09 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71877' 00:06:20.958 18:17:09 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71877 00:06:20.958 18:17:09 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71877 00:06:21.217 [2024-10-08 18:17:09.891232] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:21.477 00:06:21.477 real 0m5.208s 00:06:21.477 user 0m10.160s 00:06:21.477 sys 0m0.435s 00:06:21.477 ************************************ 00:06:21.477 END TEST event_scheduler 00:06:21.477 ************************************ 00:06:21.477 18:17:10 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.477 18:17:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.477 18:17:10 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:21.477 18:17:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:21.477 18:17:10 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.477 18:17:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.477 18:17:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.477 ************************************ 00:06:21.477 START TEST app_repeat 00:06:21.477 ************************************ 00:06:21.477 18:17:10 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:21.477 Process app_repeat pid: 71983 00:06:21.477 spdk_app_start Round 0 00:06:21.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71983 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71983' 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:21.477 18:17:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71983 /var/tmp/spdk-nbd.sock 00:06:21.477 18:17:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71983 ']' 00:06:21.477 18:17:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.477 18:17:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.477 18:17:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.477 18:17:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.477 18:17:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.477 [2024-10-08 18:17:10.226075] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:21.477 [2024-10-08 18:17:10.226182] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71983 ] 00:06:21.736 [2024-10-08 18:17:10.354723] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:21.736 [2024-10-08 18:17:10.375667] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.736 [2024-10-08 18:17:10.409653] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.736 [2024-10-08 18:17:10.409694] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.300 18:17:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.300 18:17:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:22.301 18:17:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.558 Malloc0 00:06:22.558 18:17:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.815 Malloc1 00:06:22.815 18:17:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 
00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.815 18:17:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:23.072 /dev/nbd0 00:06:23.072 18:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:23.072 18:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.072 1+0 records in 00:06:23.072 1+0 records out 00:06:23.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186678 s, 21.9 MB/s 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.072 18:17:11 
event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:23.072 18:17:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:23.072 18:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.072 18:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.072 18:17:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:23.329 /dev/nbd1 00:06:23.329 18:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.330 18:17:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.330 1+0 records in 00:06:23.330 1+0 records out 00:06:23.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202859 s, 20.2 MB/s 00:06:23.330 18:17:11 event.app_repeat -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:23.330 18:17:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:23.330 18:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.330 18:17:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.330 18:17:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.330 18:17:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.330 18:17:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.624 { 00:06:23.624 "nbd_device": "/dev/nbd0", 00:06:23.624 "bdev_name": "Malloc0" 00:06:23.624 }, 00:06:23.624 { 00:06:23.624 "nbd_device": "/dev/nbd1", 00:06:23.624 "bdev_name": "Malloc1" 00:06:23.624 } 00:06:23.624 ]' 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.624 { 00:06:23.624 "nbd_device": "/dev/nbd0", 00:06:23.624 "bdev_name": "Malloc0" 00:06:23.624 }, 00:06:23.624 { 00:06:23.624 "nbd_device": "/dev/nbd1", 00:06:23.624 "bdev_name": "Malloc1" 00:06:23.624 } 00:06:23.624 ]' 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.624 /dev/nbd1' 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.624 /dev/nbd1' 00:06:23.624 18:17:12 
event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.624 256+0 records in 00:06:23.624 256+0 records out 00:06:23.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00679261 s, 154 MB/s 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.624 256+0 records in 00:06:23.624 256+0 records out 00:06:23.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211241 s, 49.6 MB/s 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.624 256+0 records in 
00:06:23.624 256+0 records out 00:06:23.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225786 s, 46.4 MB/s 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@53 -- 
# for i in "${nbd_list[@]}" 00:06:23.624 18:17:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.883 18:17:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.141 18:17:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.141 18:17:12 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.399 18:17:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.658 [2024-10-08 18:17:13.342701] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.658 [2024-10-08 18:17:13.381088] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.658 [2024-10-08 18:17:13.381185] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.658 [2024-10-08 18:17:13.423212] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:24.658 [2024-10-08 18:17:13.423276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.939 spdk_app_start Round 1 00:06:27.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.939 18:17:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.939 18:17:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:27.939 18:17:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71983 /var/tmp/spdk-nbd.sock 00:06:27.939 18:17:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71983 ']' 00:06:27.939 18:17:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.939 18:17:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.939 18:17:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:27.939 18:17:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.939 18:17:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.939 18:17:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.939 18:17:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:27.939 18:17:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.939 Malloc0 00:06:27.939 18:17:16 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.196 Malloc1 00:06:28.196 18:17:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.196 18:17:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.196 18:17:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.197 18:17:16 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.197 18:17:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.197 /dev/nbd0 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.455 1+0 records in 00:06:28.455 1+0 records out 00:06:28.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264404 s, 15.5 MB/s 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.455 
18:17:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.455 /dev/nbd1 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.455 1+0 records in 00:06:28.455 1+0 records out 00:06:28.455 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161724 s, 25.3 MB/s 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:28.455 18:17:17 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.455 18:17:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.455 18:17:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.713 { 00:06:28.713 "nbd_device": "/dev/nbd0", 00:06:28.713 "bdev_name": "Malloc0" 00:06:28.713 }, 00:06:28.713 { 00:06:28.713 "nbd_device": "/dev/nbd1", 00:06:28.713 "bdev_name": "Malloc1" 00:06:28.713 } 00:06:28.713 ]' 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.713 { 00:06:28.713 "nbd_device": "/dev/nbd0", 00:06:28.713 "bdev_name": "Malloc0" 00:06:28.713 }, 00:06:28.713 { 00:06:28.713 "nbd_device": "/dev/nbd1", 00:06:28.713 "bdev_name": "Malloc1" 00:06:28.713 } 00:06:28.713 ]' 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.713 /dev/nbd1' 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.713 /dev/nbd1' 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.713 
18:17:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.713 256+0 records in 00:06:28.713 256+0 records out 00:06:28.713 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00924088 s, 113 MB/s 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.713 18:17:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.970 256+0 records in 00:06:28.970 256+0 records out 00:06:28.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197793 s, 53.0 MB/s 00:06:28.970 18:17:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.970 18:17:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.971 256+0 records in 00:06:28.971 256+0 records out 00:06:28.971 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147302 s, 71.2 MB/s 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.971 18:17:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.971 18:17:17 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.229 18:17:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.229 18:17:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.229 18:17:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.229 18:17:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.229 18:17:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.229 18:17:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.229 18:17:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.229 18:17:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.229 18:17:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.229 18:17:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.487 18:17:18 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.487 18:17:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.487 18:17:18 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.743 18:17:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:30.001 [2024-10-08 18:17:18.618556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.001 [2024-10-08 18:17:18.654950] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.001 [2024-10-08 18:17:18.655014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.001 [2024-10-08 18:17:18.694686] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:30.001 [2024-10-08 18:17:18.694742] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.280 spdk_app_start Round 2 00:06:33.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:33.280 18:17:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:33.280 18:17:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:33.280 18:17:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71983 /var/tmp/spdk-nbd.sock 00:06:33.280 18:17:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71983 ']' 00:06:33.280 18:17:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.280 18:17:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.280 18:17:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.280 18:17:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.280 18:17:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:33.280 18:17:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:33.280 18:17:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:33.280 18:17:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.280 Malloc0 00:06:33.280 18:17:21 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:33.280 Malloc1 00:06:33.538 18:17:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.538 /dev/nbd0 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.538 18:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.539 1+0 records in 00:06:33.539 1+0 records out 00:06:33.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249244 s, 16.4 MB/s 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.539 18:17:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:33.539 18:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.539 18:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.539 18:17:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.797 /dev/nbd1 00:06:33.797 18:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.797 18:17:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:33.797 18:17:22 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.797 1+0 records in 00:06:33.797 1+0 records out 00:06:33.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002039 s, 20.1 MB/s 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.797 18:17:22 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:33.797 18:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.797 18:17:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.797 18:17:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.797 18:17:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.797 18:17:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:34.055 { 00:06:34.055 "nbd_device": "/dev/nbd0", 00:06:34.055 "bdev_name": "Malloc0" 00:06:34.055 }, 00:06:34.055 { 00:06:34.055 "nbd_device": "/dev/nbd1", 00:06:34.055 "bdev_name": "Malloc1" 00:06:34.055 } 00:06:34.055 ]' 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:34.055 { 00:06:34.055 "nbd_device": "/dev/nbd0", 00:06:34.055 "bdev_name": "Malloc0" 00:06:34.055 }, 00:06:34.055 { 00:06:34.055 "nbd_device": "/dev/nbd1", 00:06:34.055 "bdev_name": "Malloc1" 00:06:34.055 } 00:06:34.055 ]' 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:34.055 /dev/nbd1' 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:34.055 /dev/nbd1' 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:34.055 256+0 records in 00:06:34.055 256+0 records out 00:06:34.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00580119 s, 181 MB/s 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.055 18:17:22 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:34.055 256+0 records in 00:06:34.055 256+0 records out 00:06:34.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018824 s, 55.7 MB/s 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:34.055 18:17:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:34.055 256+0 records in 00:06:34.055 256+0 records out 00:06:34.055 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194431 s, 53.9 MB/s 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.313 18:17:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.313 18:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.313 18:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.313 18:17:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.313 18:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.313 18:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.313 18:17:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.313 18:17:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.313 18:17:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.313 18:17:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.313 18:17:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.572 18:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.572 18:17:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.572 18:17:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.572 18:17:23 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.572 18:17:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.572 18:17:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.572 18:17:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.572 18:17:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.572 18:17:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.572 18:17:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.572 18:17:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.830 18:17:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.830 18:17:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:35.089 18:17:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.089 
[2024-10-08 18:17:23.897225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.089 [2024-10-08 18:17:23.926876] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.089 [2024-10-08 18:17:23.927030] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.347 [2024-10-08 18:17:23.958038] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.347 [2024-10-08 18:17:23.958098] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.627 18:17:26 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71983 /var/tmp/spdk-nbd.sock 00:06:38.627 18:17:26 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71983 ']' 00:06:38.627 18:17:26 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.627 18:17:26 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.627 18:17:26 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:38.627 18:17:26 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.627 18:17:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:38.627 18:17:27 event.app_repeat -- event/event.sh@39 -- # killprocess 71983 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71983 ']' 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71983 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71983 00:06:38.627 killing process with pid 71983 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71983' 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71983 00:06:38.627 18:17:27 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71983 00:06:38.627 spdk_app_start is called in Round 0. 00:06:38.627 Shutdown signal received, stop current app iteration 00:06:38.627 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 reinitialization... 00:06:38.627 spdk_app_start is called in Round 1. 00:06:38.627 Shutdown signal received, stop current app iteration 00:06:38.627 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 reinitialization... 00:06:38.627 spdk_app_start is called in Round 2. 
00:06:38.627 Shutdown signal received, stop current app iteration 00:06:38.627 Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 reinitialization... 00:06:38.627 spdk_app_start is called in Round 3. 00:06:38.628 Shutdown signal received, stop current app iteration 00:06:38.628 18:17:27 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:38.628 18:17:27 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:38.628 00:06:38.628 real 0m16.971s 00:06:38.628 user 0m37.796s 00:06:38.628 sys 0m2.132s 00:06:38.628 18:17:27 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.628 ************************************ 00:06:38.628 END TEST app_repeat 00:06:38.628 18:17:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.628 ************************************ 00:06:38.628 18:17:27 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:38.628 18:17:27 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:38.628 18:17:27 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.628 18:17:27 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.628 18:17:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.628 ************************************ 00:06:38.628 START TEST cpu_locks 00:06:38.628 ************************************ 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:38.628 * Looking for test storage... 
00:06:38.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.628 18:17:27 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.628 --rc genhtml_branch_coverage=1 00:06:38.628 --rc genhtml_function_coverage=1 00:06:38.628 --rc genhtml_legend=1 00:06:38.628 --rc geninfo_all_blocks=1 00:06:38.628 --rc geninfo_unexecuted_blocks=1 00:06:38.628 00:06:38.628 ' 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.628 --rc genhtml_branch_coverage=1 00:06:38.628 --rc genhtml_function_coverage=1 00:06:38.628 --rc genhtml_legend=1 00:06:38.628 --rc geninfo_all_blocks=1 00:06:38.628 --rc geninfo_unexecuted_blocks=1 
00:06:38.628 00:06:38.628 ' 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.628 --rc genhtml_branch_coverage=1 00:06:38.628 --rc genhtml_function_coverage=1 00:06:38.628 --rc genhtml_legend=1 00:06:38.628 --rc geninfo_all_blocks=1 00:06:38.628 --rc geninfo_unexecuted_blocks=1 00:06:38.628 00:06:38.628 ' 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.628 --rc genhtml_branch_coverage=1 00:06:38.628 --rc genhtml_function_coverage=1 00:06:38.628 --rc genhtml_legend=1 00:06:38.628 --rc geninfo_all_blocks=1 00:06:38.628 --rc geninfo_unexecuted_blocks=1 00:06:38.628 00:06:38.628 ' 00:06:38.628 18:17:27 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:38.628 18:17:27 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:38.628 18:17:27 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:38.628 18:17:27 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.628 18:17:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.628 ************************************ 00:06:38.628 START TEST default_locks 00:06:38.628 ************************************ 00:06:38.628 18:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:38.628 18:17:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72403 00:06:38.628 18:17:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72403 00:06:38.628 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.628 18:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72403 ']' 00:06:38.628 18:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.628 18:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.628 18:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.628 18:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.628 18:17:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.628 18:17:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.628 [2024-10-08 18:17:27.428731] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:38.628 [2024-10-08 18:17:27.428854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72403 ] 00:06:38.887 [2024-10-08 18:17:27.557499] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:38.887 [2024-10-08 18:17:27.577379] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.887 [2024-10-08 18:17:27.609940] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.452 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.452 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:39.452 18:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72403 00:06:39.452 18:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72403 00:06:39.452 18:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72403 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 72403 ']' 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 72403 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72403 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:39.710 killing process with pid 72403 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72403' 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 72403 00:06:39.710 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 72403 00:06:39.968 18:17:28 
event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72403 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72403 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 72403 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72403 ']' 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.968 ERROR: process (pid: 72403) is no longer running 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.968 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72403) - No such process 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.968 ************************************ 00:06:39.968 END TEST default_locks 00:06:39.968 ************************************ 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.968 00:06:39.968 real 0m1.419s 00:06:39.968 user 0m1.453s 00:06:39.968 sys 0m0.425s 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.968 18:17:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.968 18:17:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:39.968 18:17:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:39.968 18:17:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.968 18:17:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.228 ************************************ 00:06:40.228 START TEST default_locks_via_rpc 00:06:40.228 ************************************ 00:06:40.228 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:40.228 18:17:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72450 00:06:40.228 18:17:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72450 00:06:40.228 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72450 ']' 00:06:40.228 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.228 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.228 18:17:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.228 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.228 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.228 18:17:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.228 [2024-10-08 18:17:28.880548] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:06:40.228 [2024-10-08 18:17:28.880645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72450 ] 00:06:40.228 [2024-10-08 18:17:29.003379] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:40.228 [2024-10-08 18:17:29.023393] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.228 [2024-10-08 18:17:29.055807] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72450 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72450 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72450 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 72450 ']' 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 72450 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72450 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.167 killing process with pid 72450 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72450' 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 72450 00:06:41.167 18:17:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 72450 00:06:41.427 00:06:41.427 real 0m1.426s 00:06:41.427 user 0m1.421s 
00:06:41.427 sys 0m0.448s 00:06:41.427 18:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.427 ************************************ 00:06:41.427 END TEST default_locks_via_rpc 00:06:41.427 ************************************ 00:06:41.427 18:17:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.687 18:17:30 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:41.687 18:17:30 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.687 18:17:30 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.687 18:17:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.687 ************************************ 00:06:41.687 START TEST non_locking_app_on_locked_coremask 00:06:41.687 ************************************ 00:06:41.687 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:41.687 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72497 00:06:41.687 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72497 /var/tmp/spdk.sock 00:06:41.687 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72497 ']' 00:06:41.687 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.687 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.687 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.687 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.687 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.687 18:17:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.687 [2024-10-08 18:17:30.381510] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:41.687 [2024-10-08 18:17:30.381637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72497 ] 00:06:41.687 [2024-10-08 18:17:30.511060] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:41.687 [2024-10-08 18:17:30.527929] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.948 [2024-10-08 18:17:30.560734] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72513 00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72513 /var/tmp/spdk2.sock 00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72513 ']' 00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:42.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:42.520 18:17:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.520 [2024-10-08 18:17:31.294519] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:42.520 [2024-10-08 18:17:31.294676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72513 ] 00:06:42.780 [2024-10-08 18:17:31.429578] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:42.780 [2024-10-08 18:17:31.454593] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:42.780 [2024-10-08 18:17:31.454655] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.780 [2024-10-08 18:17:31.558098] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.360 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.360 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:43.360 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72497 00:06:43.360 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72497 00:06:43.360 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72497 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@950 -- # '[' -z 72497 ']' 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72497 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72497 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.930 killing process with pid 72497 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72497' 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72497 00:06:43.930 18:17:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72497 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72513 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72513 ']' 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72513 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72513 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.495 killing process with pid 72513 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72513' 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72513 00:06:44.495 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72513 00:06:44.753 00:06:44.753 real 0m3.182s 00:06:44.753 user 0m3.451s 00:06:44.753 sys 0m0.897s 00:06:44.753 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.753 ************************************ 00:06:44.753 END TEST non_locking_app_on_locked_coremask 00:06:44.753 ************************************ 00:06:44.753 18:17:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.753 18:17:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:44.753 18:17:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.753 18:17:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.753 18:17:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.753 ************************************ 00:06:44.753 START TEST locking_app_on_unlocked_coremask 00:06:44.753 ************************************ 00:06:44.753 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:44.753 18:17:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72571 00:06:44.753 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72571 /var/tmp/spdk.sock 00:06:44.753 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72571 ']' 00:06:44.753 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.753 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.753 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.753 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.753 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.753 18:17:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:45.014 [2024-10-08 18:17:33.630054] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:45.014 [2024-10-08 18:17:33.630197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72571 ] 00:06:45.014 [2024-10-08 18:17:33.763875] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:45.014 [2024-10-08 18:17:33.782801] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:45.014 [2024-10-08 18:17:33.782848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.014 [2024-10-08 18:17:33.833066] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72587 00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72587 /var/tmp/spdk2.sock 00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72587 ']' 00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.581 18:17:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.842 [2024-10-08 18:17:34.493103] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:45.842 [2024-10-08 18:17:34.493235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72587 ] 00:06:45.842 [2024-10-08 18:17:34.625282] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:45.842 [2024-10-08 18:17:34.649936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.103 [2024-10-08 18:17:34.715051] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.668 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.668 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:46.668 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72587 00:06:46.668 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72587 00:06:46.668 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72571 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72571 ']' 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@954 -- # kill -0 72571 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72571 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.927 killing process with pid 72571 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72571' 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72571 00:06:46.927 18:17:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72571 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72587 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72587 ']' 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72587 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72587 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.493 killing process with pid 72587 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72587' 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72587 00:06:47.493 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72587 00:06:47.750 00:06:47.750 real 0m2.813s 00:06:47.750 user 0m3.012s 00:06:47.750 sys 0m0.819s 00:06:47.750 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.750 18:17:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.750 ************************************ 00:06:47.750 END TEST locking_app_on_unlocked_coremask 00:06:47.750 ************************************ 00:06:47.750 18:17:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:47.750 18:17:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.750 18:17:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.750 18:17:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.750 ************************************ 00:06:47.750 START TEST locking_app_on_locked_coremask 00:06:47.750 ************************************ 00:06:47.750 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:47.750 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72645 00:06:47.750 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # 
waitforlisten 72645 /var/tmp/spdk.sock 00:06:47.750 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72645 ']' 00:06:47.750 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.750 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.750 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.750 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.751 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.751 18:17:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.751 [2024-10-08 18:17:36.477857] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:47.751 [2024-10-08 18:17:36.477993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72645 ] 00:06:48.008 [2024-10-08 18:17:36.608943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:48.008 [2024-10-08 18:17:36.626994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.008 [2024-10-08 18:17:36.655856] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72661 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72661 /var/tmp/spdk2.sock 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72661 /var/tmp/spdk2.sock 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72661 /var/tmp/spdk2.sock 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72661 ']' 00:06:48.574 18:17:37 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.574 18:17:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.574 [2024-10-08 18:17:37.391810] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:48.574 [2024-10-08 18:17:37.392173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72661 ] 00:06:48.831 [2024-10-08 18:17:37.517221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.831 [2024-10-08 18:17:37.534729] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72645 has claimed it. 00:06:48.831 [2024-10-08 18:17:37.534783] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:49.396 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72661) - No such process 00:06:49.396 ERROR: process (pid: 72661) is no longer running 00:06:49.396 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.396 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:49.396 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:49.396 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.396 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:49.396 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.396 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72645 00:06:49.396 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72645 00:06:49.396 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.653 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72645 00:06:49.653 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72645 ']' 00:06:49.653 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72645 00:06:49.653 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:49.653 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.653 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72645 00:06:49.653 
18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.653 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.653 killing process with pid 72645 00:06:49.653 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72645' 00:06:49.653 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72645 00:06:49.653 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72645 00:06:49.910 00:06:49.910 real 0m2.135s 00:06:49.910 user 0m2.442s 00:06:49.910 sys 0m0.510s 00:06:49.910 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.910 18:17:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.910 ************************************ 00:06:49.910 END TEST locking_app_on_locked_coremask 00:06:49.910 ************************************ 00:06:49.910 18:17:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.910 18:17:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.910 18:17:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.910 18:17:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.910 ************************************ 00:06:49.910 START TEST locking_overlapped_coremask 00:06:49.910 ************************************ 00:06:49.910 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:49.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:49.910 18:17:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72703 00:06:49.910 18:17:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72703 /var/tmp/spdk.sock 00:06:49.910 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72703 ']' 00:06:49.910 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.910 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.910 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.910 18:17:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.911 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.911 18:17:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.911 [2024-10-08 18:17:38.650703] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:49.911 [2024-10-08 18:17:38.650824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72703 ] 00:06:50.168 [2024-10-08 18:17:38.779687] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:50.168 [2024-10-08 18:17:38.797731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.168 [2024-10-08 18:17:38.830908] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.168 [2024-10-08 18:17:38.831124] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.168 [2024-10-08 18:17:38.831208] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72721 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72721 /var/tmp/spdk2.sock 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72721 /var/tmp/spdk2.sock 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:50.733 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 
72721 /var/tmp/spdk2.sock 00:06:50.734 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72721 ']' 00:06:50.734 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.734 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.734 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.734 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.734 18:17:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.734 [2024-10-08 18:17:39.511599] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:50.734 [2024-10-08 18:17:39.511716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72721 ] 00:06:51.001 [2024-10-08 18:17:39.643811] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:51.001 [2024-10-08 18:17:39.667312] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72703 has claimed it. 00:06:51.001 [2024-10-08 18:17:39.667357] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:51.568 ERROR: process (pid: 72721) is no longer running 00:06:51.568 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72721) - No such process 00:06:51.568 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.568 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72703 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72703 ']' 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72703 00:06:51.569 18:17:40 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72703 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.569 killing process with pid 72703 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72703' 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72703 00:06:51.569 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72703 00:06:51.827 00:06:51.827 real 0m1.842s 00:06:51.827 user 0m5.036s 00:06:51.827 sys 0m0.388s 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.827 ************************************ 00:06:51.827 END TEST locking_overlapped_coremask 00:06:51.827 ************************************ 00:06:51.827 18:17:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:51.827 18:17:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.827 18:17:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.827 18:17:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.827 ************************************ 00:06:51.827 START TEST 
locking_overlapped_coremask_via_rpc 00:06:51.827 ************************************ 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72763 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72763 /var/tmp/spdk.sock 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72763 ']' 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:51.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.827 18:17:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.827 [2024-10-08 18:17:40.536122] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:06:51.827 [2024-10-08 18:17:40.536225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72763 ] 00:06:51.827 [2024-10-08 18:17:40.665125] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.086 [2024-10-08 18:17:40.681471] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:52.086 [2024-10-08 18:17:40.681516] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.086 [2024-10-08 18:17:40.714724] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.086 [2024-10-08 18:17:40.715008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.086 [2024-10-08 18:17:40.715039] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.652 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.652 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:52.652 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:52.652 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72781 00:06:52.653 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72781 /var/tmp/spdk2.sock 00:06:52.653 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72781 ']' 00:06:52.653 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:52.653 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.653 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.653 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.653 18:17:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.653 [2024-10-08 18:17:41.445084] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:52.653 [2024-10-08 18:17:41.445202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72781 ] 00:06:52.910 [2024-10-08 18:17:41.578522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.910 [2024-10-08 18:17:41.608993] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.910 [2024-10-08 18:17:41.609035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.910 [2024-10-08 18:17:41.697287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.910 [2024-10-08 18:17:41.697427] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.910 [2024-10-08 18:17:41.697518] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.478 18:17:42 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.478 [2024-10-08 18:17:42.292929] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72763 has claimed it. 00:06:53.478 request: 00:06:53.478 { 00:06:53.478 "method": "framework_enable_cpumask_locks", 00:06:53.478 "req_id": 1 00:06:53.478 } 00:06:53.478 Got JSON-RPC error response 00:06:53.478 response: 00:06:53.478 { 00:06:53.478 "code": -32603, 00:06:53.478 "message": "Failed to claim CPU core: 2" 00:06:53.478 } 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72763 /var/tmp/spdk.sock 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 72763 ']' 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.478 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.737 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.737 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.737 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72781 /var/tmp/spdk2.sock 00:06:53.737 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72781 ']' 00:06:53.737 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.737 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.737 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:53.737 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.737 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.995 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.995 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:53.995 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.995 ************************************ 00:06:53.995 END TEST locking_overlapped_coremask_via_rpc 00:06:53.995 ************************************ 00:06:53.995 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.995 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.995 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.995 00:06:53.995 real 0m2.257s 00:06:53.995 user 0m1.068s 00:06:53.995 sys 0m0.117s 00:06:53.995 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.995 18:17:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.995 18:17:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.995 18:17:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72763 ]] 00:06:53.995 18:17:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 72763 00:06:53.995 18:17:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72763 ']' 00:06:53.995 18:17:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72763 00:06:53.995 18:17:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:53.995 18:17:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.995 18:17:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72763 00:06:53.995 18:17:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.995 18:17:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.995 18:17:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72763' 00:06:53.995 killing process with pid 72763 00:06:53.995 18:17:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72763 00:06:53.995 18:17:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72763 00:06:54.253 18:17:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72781 ]] 00:06:54.253 18:17:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72781 00:06:54.253 18:17:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72781 ']' 00:06:54.253 18:17:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72781 00:06:54.253 18:17:43 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:54.253 18:17:43 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.253 18:17:43 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72781 00:06:54.253 18:17:43 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:54.253 killing process with pid 72781 00:06:54.253 18:17:43 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:54.253 18:17:43 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 72781' 00:06:54.253 18:17:43 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72781 00:06:54.253 18:17:43 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72781 00:06:54.819 18:17:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.819 18:17:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:54.819 18:17:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72763 ]] 00:06:54.819 18:17:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72763 00:06:54.819 18:17:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72763 ']' 00:06:54.819 18:17:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72763 00:06:54.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72763) - No such process 00:06:54.819 Process with pid 72763 is not found 00:06:54.819 18:17:43 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72763 is not found' 00:06:54.819 18:17:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72781 ]] 00:06:54.819 18:17:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72781 00:06:54.819 18:17:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72781 ']' 00:06:54.819 18:17:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72781 00:06:54.819 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72781) - No such process 00:06:54.819 Process with pid 72781 is not found 00:06:54.819 18:17:43 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72781 is not found' 00:06:54.819 18:17:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:54.819 00:06:54.819 real 0m16.204s 00:06:54.819 user 0m28.200s 00:06:54.819 sys 0m4.388s 00:06:54.819 18:17:43 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.819 ************************************ 00:06:54.819 END TEST cpu_locks 00:06:54.819 
************************************ 00:06:54.819 18:17:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.819 ************************************ 00:06:54.819 END TEST event 00:06:54.819 ************************************ 00:06:54.819 00:06:54.819 real 0m42.650s 00:06:54.819 user 1m22.595s 00:06:54.819 sys 0m7.404s 00:06:54.819 18:17:43 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.819 18:17:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.819 18:17:43 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.819 18:17:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.819 18:17:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.819 18:17:43 -- common/autotest_common.sh@10 -- # set +x 00:06:54.819 ************************************ 00:06:54.819 START TEST thread 00:06:54.819 ************************************ 00:06:54.819 18:17:43 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:54.819 * Looking for test storage... 
00:06:54.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:54.819 18:17:43 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:54.819 18:17:43 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:54.819 18:17:43 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:54.819 18:17:43 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:54.819 18:17:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.819 18:17:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.819 18:17:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.819 18:17:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.819 18:17:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.819 18:17:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.819 18:17:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.819 18:17:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.819 18:17:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.819 18:17:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.819 18:17:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.819 18:17:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:54.819 18:17:43 thread -- scripts/common.sh@345 -- # : 1 00:06:54.819 18:17:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.819 18:17:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.819 18:17:43 thread -- scripts/common.sh@365 -- # decimal 1 00:06:54.819 18:17:43 thread -- scripts/common.sh@353 -- # local d=1 00:06:54.820 18:17:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.820 18:17:43 thread -- scripts/common.sh@355 -- # echo 1 00:06:54.820 18:17:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.820 18:17:43 thread -- scripts/common.sh@366 -- # decimal 2 00:06:54.820 18:17:43 thread -- scripts/common.sh@353 -- # local d=2 00:06:54.820 18:17:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.820 18:17:43 thread -- scripts/common.sh@355 -- # echo 2 00:06:54.820 18:17:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.820 18:17:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.820 18:17:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.820 18:17:43 thread -- scripts/common.sh@368 -- # return 0 00:06:54.820 18:17:43 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.820 18:17:43 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.820 --rc genhtml_branch_coverage=1 00:06:54.820 --rc genhtml_function_coverage=1 00:06:54.820 --rc genhtml_legend=1 00:06:54.820 --rc geninfo_all_blocks=1 00:06:54.820 --rc geninfo_unexecuted_blocks=1 00:06:54.820 00:06:54.820 ' 00:06:54.820 18:17:43 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.820 --rc genhtml_branch_coverage=1 00:06:54.820 --rc genhtml_function_coverage=1 00:06:54.820 --rc genhtml_legend=1 00:06:54.820 --rc geninfo_all_blocks=1 00:06:54.820 --rc geninfo_unexecuted_blocks=1 00:06:54.820 00:06:54.820 ' 00:06:54.820 18:17:43 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:54.820 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.820 --rc genhtml_branch_coverage=1 00:06:54.820 --rc genhtml_function_coverage=1 00:06:54.820 --rc genhtml_legend=1 00:06:54.820 --rc geninfo_all_blocks=1 00:06:54.820 --rc geninfo_unexecuted_blocks=1 00:06:54.820 00:06:54.820 ' 00:06:54.820 18:17:43 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:54.820 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.820 --rc genhtml_branch_coverage=1 00:06:54.820 --rc genhtml_function_coverage=1 00:06:54.820 --rc genhtml_legend=1 00:06:54.820 --rc geninfo_all_blocks=1 00:06:54.820 --rc geninfo_unexecuted_blocks=1 00:06:54.820 00:06:54.820 ' 00:06:54.820 18:17:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.820 18:17:43 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:54.820 18:17:43 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.820 18:17:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.820 ************************************ 00:06:54.820 START TEST thread_poller_perf 00:06:54.820 ************************************ 00:06:54.820 18:17:43 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.078 [2024-10-08 18:17:43.675077] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:55.078 [2024-10-08 18:17:43.675185] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72908 ] 00:06:55.078 [2024-10-08 18:17:43.802643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:55.078 [2024-10-08 18:17:43.821376] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.078 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:55.078 [2024-10-08 18:17:43.850708] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.451 [2024-10-08T18:17:45.301Z] ====================================== 00:06:56.451 [2024-10-08T18:17:45.301Z] busy:2607540372 (cyc) 00:06:56.451 [2024-10-08T18:17:45.301Z] total_run_count: 413000 00:06:56.451 [2024-10-08T18:17:45.301Z] tsc_hz: 2600000000 (cyc) 00:06:56.451 [2024-10-08T18:17:45.301Z] ====================================== 00:06:56.451 [2024-10-08T18:17:45.301Z] poller_cost: 6313 (cyc), 2428 (nsec) 00:06:56.451 00:06:56.451 real 0m1.253s 00:06:56.451 user 0m1.093s 00:06:56.451 sys 0m0.055s 00:06:56.451 18:17:44 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.451 18:17:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.451 ************************************ 00:06:56.451 END TEST thread_poller_perf 00:06:56.451 ************************************ 00:06:56.451 18:17:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.451 18:17:44 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:56.451 18:17:44 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.451 18:17:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.451 ************************************ 00:06:56.451 START TEST thread_poller_perf 00:06:56.451 ************************************ 00:06:56.451 18:17:44 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.451 [2024-10-08 18:17:44.975647] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 
initialization... 00:06:56.451 [2024-10-08 18:17:44.975766] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72939 ] 00:06:56.451 [2024-10-08 18:17:45.103408] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:56.451 [2024-10-08 18:17:45.121069] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.451 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:56.451 [2024-10-08 18:17:45.152085] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.384 [2024-10-08T18:17:46.234Z] ====================================== 00:06:57.384 [2024-10-08T18:17:46.234Z] busy:2602632336 (cyc) 00:06:57.384 [2024-10-08T18:17:46.234Z] total_run_count: 5311000 00:06:57.384 [2024-10-08T18:17:46.234Z] tsc_hz: 2600000000 (cyc) 00:06:57.384 [2024-10-08T18:17:46.234Z] ====================================== 00:06:57.384 [2024-10-08T18:17:46.234Z] poller_cost: 490 (cyc), 188 (nsec) 00:06:57.384 00:06:57.384 real 0m1.260s 00:06:57.384 user 0m1.076s 00:06:57.384 sys 0m0.078s 00:06:57.384 18:17:46 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.384 ************************************ 00:06:57.384 END TEST thread_poller_perf 00:06:57.384 ************************************ 00:06:57.384 18:17:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:57.643 18:17:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:57.643 00:06:57.643 real 0m2.759s 00:06:57.643 user 0m2.282s 00:06:57.643 sys 0m0.260s 00:06:57.643 18:17:46 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.643 18:17:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.643 
************************************ 00:06:57.643 END TEST thread 00:06:57.643 ************************************ 00:06:57.643 18:17:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:57.643 18:17:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.643 18:17:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:57.643 18:17:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:57.643 18:17:46 -- common/autotest_common.sh@10 -- # set +x 00:06:57.643 ************************************ 00:06:57.643 START TEST app_cmdline 00:06:57.643 ************************************ 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:57.643 * Looking for test storage... 00:06:57.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.643 18:17:46 app_cmdline -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.643 18:17:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:57.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.643 --rc genhtml_branch_coverage=1 00:06:57.643 --rc genhtml_function_coverage=1 00:06:57.643 --rc genhtml_legend=1 00:06:57.643 --rc geninfo_all_blocks=1 00:06:57.643 --rc geninfo_unexecuted_blocks=1 00:06:57.643 
00:06:57.643 ' 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:57.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.643 --rc genhtml_branch_coverage=1 00:06:57.643 --rc genhtml_function_coverage=1 00:06:57.643 --rc genhtml_legend=1 00:06:57.643 --rc geninfo_all_blocks=1 00:06:57.643 --rc geninfo_unexecuted_blocks=1 00:06:57.643 00:06:57.643 ' 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:57.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.643 --rc genhtml_branch_coverage=1 00:06:57.643 --rc genhtml_function_coverage=1 00:06:57.643 --rc genhtml_legend=1 00:06:57.643 --rc geninfo_all_blocks=1 00:06:57.643 --rc geninfo_unexecuted_blocks=1 00:06:57.643 00:06:57.643 ' 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:57.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.643 --rc genhtml_branch_coverage=1 00:06:57.643 --rc genhtml_function_coverage=1 00:06:57.643 --rc genhtml_legend=1 00:06:57.643 --rc geninfo_all_blocks=1 00:06:57.643 --rc geninfo_unexecuted_blocks=1 00:06:57.643 00:06:57.643 ' 00:06:57.643 18:17:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:57.643 18:17:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73028 00:06:57.643 18:17:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73028 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 73028 ']' 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.643 18:17:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.643 18:17:46 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:57.905 [2024-10-08 18:17:46.500718] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:57.905 [2024-10-08 18:17:46.500852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73028 ] 00:06:57.905 [2024-10-08 18:17:46.629202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:57.905 [2024-10-08 18:17:46.648372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.905 [2024-10-08 18:17:46.679988] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:58.843 { 00:06:58.843 "version": "SPDK v25.01-pre git sha1 92108e0a2", 00:06:58.843 "fields": { 00:06:58.843 "major": 25, 00:06:58.843 "minor": 1, 00:06:58.843 "patch": 0, 00:06:58.843 "suffix": "-pre", 00:06:58.843 "commit": "92108e0a2" 00:06:58.843 } 00:06:58.843 } 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:58.843 18:17:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
env_dpdk_get_mem_stats 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:58.843 18:17:47 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:59.102 request: 00:06:59.102 { 00:06:59.102 "method": "env_dpdk_get_mem_stats", 00:06:59.102 "req_id": 1 00:06:59.102 } 00:06:59.102 Got JSON-RPC error response 00:06:59.102 response: 00:06:59.102 { 00:06:59.102 "code": -32601, 00:06:59.102 "message": "Method not found" 00:06:59.102 } 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.102 18:17:47 app_cmdline -- app/cmdline.sh@1 
-- # killprocess 73028 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 73028 ']' 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 73028 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73028 00:06:59.102 killing process with pid 73028 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73028' 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@969 -- # kill 73028 00:06:59.102 18:17:47 app_cmdline -- common/autotest_common.sh@974 -- # wait 73028 00:06:59.362 ************************************ 00:06:59.362 END TEST app_cmdline 00:06:59.362 ************************************ 00:06:59.362 00:06:59.362 real 0m1.802s 00:06:59.362 user 0m2.152s 00:06:59.362 sys 0m0.418s 00:06:59.362 18:17:48 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.362 18:17:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:59.362 18:17:48 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.362 18:17:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:59.362 18:17:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.362 18:17:48 -- common/autotest_common.sh@10 -- # set +x 00:06:59.362 ************************************ 00:06:59.362 START TEST version 00:06:59.362 ************************************ 00:06:59.362 18:17:48 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:59.624 * Looking for test 
storage... 00:06:59.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:59.624 18:17:48 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:59.624 18:17:48 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:59.624 18:17:48 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:59.624 18:17:48 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:59.624 18:17:48 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.624 18:17:48 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.624 18:17:48 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.624 18:17:48 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.624 18:17:48 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.624 18:17:48 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.624 18:17:48 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.624 18:17:48 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.624 18:17:48 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.624 18:17:48 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.624 18:17:48 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.624 18:17:48 version -- scripts/common.sh@344 -- # case "$op" in 00:06:59.624 18:17:48 version -- scripts/common.sh@345 -- # : 1 00:06:59.624 18:17:48 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.624 18:17:48 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.624 18:17:48 version -- scripts/common.sh@365 -- # decimal 1 00:06:59.624 18:17:48 version -- scripts/common.sh@353 -- # local d=1 00:06:59.624 18:17:48 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.624 18:17:48 version -- scripts/common.sh@355 -- # echo 1 00:06:59.624 18:17:48 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.624 18:17:48 version -- scripts/common.sh@366 -- # decimal 2 00:06:59.624 18:17:48 version -- scripts/common.sh@353 -- # local d=2 00:06:59.624 18:17:48 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.624 18:17:48 version -- scripts/common.sh@355 -- # echo 2 00:06:59.624 18:17:48 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.624 18:17:48 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.624 18:17:48 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.624 18:17:48 version -- scripts/common.sh@368 -- # return 0 00:06:59.624 18:17:48 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.624 18:17:48 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:59.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.624 --rc genhtml_branch_coverage=1 00:06:59.624 --rc genhtml_function_coverage=1 00:06:59.624 --rc genhtml_legend=1 00:06:59.624 --rc geninfo_all_blocks=1 00:06:59.624 --rc geninfo_unexecuted_blocks=1 00:06:59.624 00:06:59.624 ' 00:06:59.624 18:17:48 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:59.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.624 --rc genhtml_branch_coverage=1 00:06:59.624 --rc genhtml_function_coverage=1 00:06:59.624 --rc genhtml_legend=1 00:06:59.624 --rc geninfo_all_blocks=1 00:06:59.624 --rc geninfo_unexecuted_blocks=1 00:06:59.624 00:06:59.624 ' 00:06:59.624 18:17:48 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:59.624 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.624 --rc genhtml_branch_coverage=1 00:06:59.624 --rc genhtml_function_coverage=1 00:06:59.624 --rc genhtml_legend=1 00:06:59.624 --rc geninfo_all_blocks=1 00:06:59.624 --rc geninfo_unexecuted_blocks=1 00:06:59.624 00:06:59.624 ' 00:06:59.624 18:17:48 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:59.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.624 --rc genhtml_branch_coverage=1 00:06:59.624 --rc genhtml_function_coverage=1 00:06:59.624 --rc genhtml_legend=1 00:06:59.624 --rc geninfo_all_blocks=1 00:06:59.624 --rc geninfo_unexecuted_blocks=1 00:06:59.624 00:06:59.624 ' 00:06:59.624 18:17:48 version -- app/version.sh@17 -- # get_header_version major 00:06:59.624 18:17:48 version -- app/version.sh@14 -- # cut -f2 00:06:59.624 18:17:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.624 18:17:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.624 18:17:48 version -- app/version.sh@17 -- # major=25 00:06:59.624 18:17:48 version -- app/version.sh@18 -- # get_header_version minor 00:06:59.624 18:17:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.624 18:17:48 version -- app/version.sh@14 -- # cut -f2 00:06:59.624 18:17:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.624 18:17:48 version -- app/version.sh@18 -- # minor=1 00:06:59.624 18:17:48 version -- app/version.sh@19 -- # get_header_version patch 00:06:59.624 18:17:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.624 18:17:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.624 18:17:48 version -- app/version.sh@14 -- # cut -f2 00:06:59.624 18:17:48 version -- app/version.sh@19 -- # patch=0 00:06:59.624 
18:17:48 version -- app/version.sh@20 -- # get_header_version suffix 00:06:59.624 18:17:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:59.625 18:17:48 version -- app/version.sh@14 -- # tr -d '"' 00:06:59.625 18:17:48 version -- app/version.sh@14 -- # cut -f2 00:06:59.625 18:17:48 version -- app/version.sh@20 -- # suffix=-pre 00:06:59.625 18:17:48 version -- app/version.sh@22 -- # version=25.1 00:06:59.625 18:17:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:59.625 18:17:48 version -- app/version.sh@28 -- # version=25.1rc0 00:06:59.625 18:17:48 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:59.625 18:17:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:59.625 18:17:48 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:59.625 18:17:48 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:59.625 00:06:59.625 real 0m0.234s 00:06:59.625 user 0m0.134s 00:06:59.625 sys 0m0.106s 00:06:59.625 ************************************ 00:06:59.625 END TEST version 00:06:59.625 ************************************ 00:06:59.625 18:17:48 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.625 18:17:48 version -- common/autotest_common.sh@10 -- # set +x 00:06:59.625 18:17:48 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:59.625 18:17:48 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:59.625 18:17:48 -- spdk/autotest.sh@194 -- # uname -s 00:06:59.625 18:17:48 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:59.625 18:17:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:59.625 18:17:48 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:59.625 18:17:48 -- spdk/autotest.sh@207 
-- # '[' 1 -eq 1 ']' 00:06:59.625 18:17:48 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:59.625 18:17:48 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:59.625 18:17:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.625 18:17:48 -- common/autotest_common.sh@10 -- # set +x 00:06:59.625 ************************************ 00:06:59.625 START TEST blockdev_nvme 00:06:59.625 ************************************ 00:06:59.625 18:17:48 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:59.886 * Looking for test storage... 00:06:59.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:59.886 18:17:48 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:59.886 18:17:48 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:59.886 18:17:48 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:59.886 18:17:48 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.886 18:17:48 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:59.886 18:17:48 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.886 18:17:48 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:59.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.886 --rc genhtml_branch_coverage=1 00:06:59.886 --rc genhtml_function_coverage=1 00:06:59.886 --rc genhtml_legend=1 00:06:59.886 --rc geninfo_all_blocks=1 00:06:59.886 --rc geninfo_unexecuted_blocks=1 00:06:59.886 00:06:59.886 ' 00:06:59.886 18:17:48 blockdev_nvme -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:06:59.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.886 --rc genhtml_branch_coverage=1 00:06:59.886 --rc genhtml_function_coverage=1 00:06:59.886 --rc genhtml_legend=1 00:06:59.886 --rc geninfo_all_blocks=1 00:06:59.886 --rc geninfo_unexecuted_blocks=1 00:06:59.886 00:06:59.886 ' 00:06:59.886 18:17:48 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:59.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.886 --rc genhtml_branch_coverage=1 00:06:59.886 --rc genhtml_function_coverage=1 00:06:59.886 --rc genhtml_legend=1 00:06:59.886 --rc geninfo_all_blocks=1 00:06:59.886 --rc geninfo_unexecuted_blocks=1 00:06:59.886 00:06:59.886 ' 00:06:59.886 18:17:48 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:59.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.886 --rc genhtml_branch_coverage=1 00:06:59.886 --rc genhtml_function_coverage=1 00:06:59.886 --rc genhtml_legend=1 00:06:59.886 --rc geninfo_all_blocks=1 00:06:59.886 --rc geninfo_unexecuted_blocks=1 00:06:59.886 00:06:59.886 ' 00:06:59.886 18:17:48 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:59.886 18:17:48 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:59.886 18:17:48 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:59.886 18:17:48 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.886 18:17:48 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:59.886 18:17:48 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:59.886 18:17:48 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:59.887 18:17:48 
blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73189 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73189 00:06:59.887 18:17:48 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 73189 ']' 00:06:59.887 18:17:48 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.887 18:17:48 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:59.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:59.887 18:17:48 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.887 18:17:48 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.887 18:17:48 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.887 18:17:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.887 [2024-10-08 18:17:48.693358] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:06:59.887 [2024-10-08 18:17:48.693723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73189 ] 00:07:00.148 [2024-10-08 18:17:48.827470] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:00.148 [2024-10-08 18:17:48.877261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.148 [2024-10-08 18:17:48.959625] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.721 18:17:49 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.721 18:17:49 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:00.721 18:17:49 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:00.721 18:17:49 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:00.721 18:17:49 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:00.721 18:17:49 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:00.721 18:17:49 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:00.982 18:17:49 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:00.982 18:17:49 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.982 18:17:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.245 18:17:49 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.245 18:17:49 blockdev_nvme -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.245 18:17:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:01.245 18:17:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.245 18:17:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.245 18:17:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.245 18:17:49 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:01.245 18:17:49 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:01.245 18:17:49 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:01.245 18:17:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.245 18:17:50 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:01.245 18:17:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:01.245 18:17:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:01.245 18:17:50 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "289b7540-3f82-4bb9-bb07-88e86f14ca72"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "289b7540-3f82-4bb9-bb07-88e86f14ca72",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "5c76bb5b-1790-41ff-9044-38078e598c29"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "5c76bb5b-1790-41ff-9044-38078e598c29",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "e2b7bd89-8d0e-4171-8254-05f71d764a2f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e2b7bd89-8d0e-4171-8254-05f71d764a2f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "59070c88-82b8-49e0-b7d8-845b031b3302"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "59070c88-82b8-49e0-b7d8-845b031b3302",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' 
"ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9d99a549-8a44-4caf-a018-e6e440e36a71"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9d99a549-8a44-4caf-a018-e6e440e36a71",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f2851ca8-9a51-4d6a-bfdf-e2a89fe74fff"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f2851ca8-9a51-4d6a-bfdf-e2a89fe74fff",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:01.245 18:17:50 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:01.245 18:17:50 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:01.245 18:17:50 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:01.245 18:17:50 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 73189 00:07:01.246 18:17:50 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 73189 ']' 00:07:01.246 18:17:50 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 73189 00:07:01.246 18:17:50 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:01.246 18:17:50 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:01.246 18:17:50 blockdev_nvme -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73189 00:07:01.505 killing process with pid 73189 00:07:01.505 18:17:50 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:01.505 18:17:50 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:01.505 18:17:50 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73189' 00:07:01.505 18:17:50 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 73189 00:07:01.505 18:17:50 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 73189 00:07:01.764 18:17:50 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:01.764 18:17:50 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:01.764 18:17:50 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:01.764 18:17:50 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.764 18:17:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.764 ************************************ 00:07:01.764 START TEST bdev_hello_world 00:07:01.764 ************************************ 00:07:01.764 18:17:50 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:01.764 [2024-10-08 18:17:50.474835] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:07:01.764 [2024-10-08 18:17:50.474949] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73258 ] 00:07:01.764 [2024-10-08 18:17:50.602700] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:02.023 [2024-10-08 18:17:50.616194] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.023 [2024-10-08 18:17:50.646923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.281 [2024-10-08 18:17:51.012950] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:02.281 [2024-10-08 18:17:51.012991] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:02.281 [2024-10-08 18:17:51.013017] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:02.281 [2024-10-08 18:17:51.015086] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:02.281 [2024-10-08 18:17:51.015855] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:02.281 [2024-10-08 18:17:51.016022] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:02.281 [2024-10-08 18:17:51.016707] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:02.281 00:07:02.281 [2024-10-08 18:17:51.016769] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:02.540 00:07:02.540 real 0m0.761s 00:07:02.540 user 0m0.496s 00:07:02.540 sys 0m0.163s 00:07:02.540 18:17:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.540 ************************************ 00:07:02.540 END TEST bdev_hello_world 00:07:02.540 ************************************ 00:07:02.540 18:17:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:02.540 18:17:51 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:02.540 18:17:51 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:02.540 18:17:51 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.540 18:17:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.540 ************************************ 00:07:02.540 START TEST bdev_bounds 00:07:02.540 ************************************ 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73282 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:02.540 Process bdevio pid: 73282 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73282' 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73282 00:07:02.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73282 ']' 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.540 18:17:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:02.540 [2024-10-08 18:17:51.294797] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:02.540 [2024-10-08 18:17:51.294915] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73282 ] 00:07:02.798 [2024-10-08 18:17:51.427592] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:02.798 [2024-10-08 18:17:51.446656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.798 [2024-10-08 18:17:51.481262] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.798 [2024-10-08 18:17:51.481476] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.798 [2024-10-08 18:17:51.481537] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.364 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.364 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:03.364 18:17:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:03.622 I/O targets: 00:07:03.622 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:03.622 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:03.622 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:03.622 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:03.622 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:03.622 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:03.622 00:07:03.622 00:07:03.622 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.622 http://cunit.sourceforge.net/ 00:07:03.622 00:07:03.622 00:07:03.622 Suite: bdevio tests on: Nvme3n1 00:07:03.622 Test: blockdev write read block ...passed 00:07:03.622 Test: blockdev write zeroes read block ...passed 00:07:03.622 Test: blockdev write zeroes read no split ...passed 00:07:03.622 Test: blockdev write zeroes read split ...passed 00:07:03.622 Test: blockdev write zeroes read split partial ...passed 00:07:03.622 Test: blockdev reset ...[2024-10-08 18:17:52.249231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:03.622 [2024-10-08 18:17:52.251323] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:03.622 passed 00:07:03.622 Test: blockdev write read 8 blocks ...passed 00:07:03.622 Test: blockdev write read size > 128k ...passed 00:07:03.622 Test: blockdev write read invalid size ...passed 00:07:03.622 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.622 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.622 Test: blockdev write read max offset ...passed 00:07:03.622 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.622 Test: blockdev writev readv 8 blocks ...passed 00:07:03.622 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.622 Test: blockdev writev readv block ...passed 00:07:03.622 Test: blockdev writev readv size > 128k ...passed 00:07:03.622 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.622 Test: blockdev comparev and writev ...[2024-10-08 18:17:52.258407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ccc06000 len:0x1000 00:07:03.622 [2024-10-08 18:17:52.258478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.622 passed 00:07:03.622 Test: blockdev nvme passthru rw ...passed 00:07:03.622 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:17:52.259034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:07:03.622 Test: blockdev nvme admin passthru ...passed 00:07:03.622 Test: blockdev copy ...RP2 0x0 00:07:03.622 [2024-10-08 18:17:52.259195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.622 passed 00:07:03.622 Suite: bdevio tests on: Nvme2n3 00:07:03.622 Test: blockdev write read block ...passed 00:07:03.622 Test: blockdev write zeroes read block ...passed 00:07:03.623 Test: blockdev write zeroes read no 
split ...passed 00:07:03.623 Test: blockdev write zeroes read split ...passed 00:07:03.623 Test: blockdev write zeroes read split partial ...passed 00:07:03.623 Test: blockdev reset ...[2024-10-08 18:17:52.270735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:03.623 [2024-10-08 18:17:52.272708] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:03.623 passed 00:07:03.623 Test: blockdev write read 8 blocks ...passed 00:07:03.623 Test: blockdev write read size > 128k ...passed 00:07:03.623 Test: blockdev write read invalid size ...passed 00:07:03.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.623 Test: blockdev write read max offset ...passed 00:07:03.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.623 Test: blockdev writev readv 8 blocks ...passed 00:07:03.623 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.623 Test: blockdev writev readv block ...passed 00:07:03.623 Test: blockdev writev readv size > 128k ...passed 00:07:03.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.623 Test: blockdev comparev and writev ...[2024-10-08 18:17:52.277176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de405000 len:0x1000 00:07:03.623 [2024-10-08 18:17:52.277218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.623 passed 00:07:03.623 Test: blockdev nvme passthru rw ...passed 00:07:03.623 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:17:52.277825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1passed 00:07:03.623 Test: blockdev nvme admin passthru ... 
cid:190 PRP1 0x0 PRP2 0x0 00:07:03.623 [2024-10-08 18:17:52.277932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.623 passed 00:07:03.623 Test: blockdev copy ...passed 00:07:03.623 Suite: bdevio tests on: Nvme2n2 00:07:03.623 Test: blockdev write read block ...passed 00:07:03.623 Test: blockdev write zeroes read block ...passed 00:07:03.623 Test: blockdev write zeroes read no split ...passed 00:07:03.623 Test: blockdev write zeroes read split ...passed 00:07:03.623 Test: blockdev write zeroes read split partial ...passed 00:07:03.623 Test: blockdev reset ...[2024-10-08 18:17:52.292440] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:03.623 [2024-10-08 18:17:52.294374] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:03.623 passed 00:07:03.623 Test: blockdev write read 8 blocks ...passed 00:07:03.623 Test: blockdev write read size > 128k ...passed 00:07:03.623 Test: blockdev write read invalid size ...passed 00:07:03.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.623 Test: blockdev write read max offset ...passed 00:07:03.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.623 Test: blockdev writev readv 8 blocks ...passed 00:07:03.623 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.623 Test: blockdev writev readv block ...passed 00:07:03.623 Test: blockdev writev readv size > 128k ...passed 00:07:03.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.623 Test: blockdev comparev and writev ...[2024-10-08 18:17:52.299067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de836000 len:0x1000 00:07:03.623 [2024-10-08 
18:17:52.299107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.623 passed 00:07:03.623 Test: blockdev nvme passthru rw ...passed 00:07:03.623 Test: blockdev nvme passthru vendor specific ...passed 00:07:03.623 Test: blockdev nvme admin passthru ...[2024-10-08 18:17:52.299704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.623 [2024-10-08 18:17:52.299733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.623 passed 00:07:03.623 Test: blockdev copy ...passed 00:07:03.623 Suite: bdevio tests on: Nvme2n1 00:07:03.623 Test: blockdev write read block ...passed 00:07:03.623 Test: blockdev write zeroes read block ...passed 00:07:03.623 Test: blockdev write zeroes read no split ...passed 00:07:03.623 Test: blockdev write zeroes read split ...passed 00:07:03.623 Test: blockdev write zeroes read split partial ...passed 00:07:03.623 Test: blockdev reset ...[2024-10-08 18:17:52.313917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:03.623 passed 00:07:03.623 Test: blockdev write read 8 blocks ...[2024-10-08 18:17:52.315787] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:03.623 passed 00:07:03.623 Test: blockdev write read size > 128k ...passed 00:07:03.623 Test: blockdev write read invalid size ...passed 00:07:03.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.623 Test: blockdev write read max offset ...passed 00:07:03.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.623 Test: blockdev writev readv 8 blocks ...passed 00:07:03.623 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.623 Test: blockdev writev readv block ...passed 00:07:03.623 Test: blockdev writev readv size > 128k ...passed 00:07:03.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.623 Test: blockdev comparev and writev ...[2024-10-08 18:17:52.320547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de830000 len:0x1000 00:07:03.623 [2024-10-08 18:17:52.320588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.623 passed 00:07:03.623 Test: blockdev nvme passthru rw ...passed 00:07:03.623 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:17:52.321151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 Ppassed 00:07:03.623 Test: blockdev nvme admin passthru ...RP2 0x0 00:07:03.623 [2024-10-08 18:17:52.321253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.623 passed 00:07:03.623 Test: blockdev copy ...passed 00:07:03.623 Suite: bdevio tests on: Nvme1n1 00:07:03.623 Test: blockdev write read block ...passed 00:07:03.623 Test: blockdev write zeroes read block ...passed 00:07:03.623 Test: blockdev write zeroes read no split ...passed 00:07:03.623 Test: blockdev write zeroes 
read split ...passed 00:07:03.623 Test: blockdev write zeroes read split partial ...passed 00:07:03.623 Test: blockdev reset ...[2024-10-08 18:17:52.335191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:03.623 [2024-10-08 18:17:52.336889] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:03.623 passed 00:07:03.623 Test: blockdev write read 8 blocks ...passed 00:07:03.623 Test: blockdev write read size > 128k ...passed 00:07:03.623 Test: blockdev write read invalid size ...passed 00:07:03.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.623 Test: blockdev write read max offset ...passed 00:07:03.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.623 Test: blockdev writev readv 8 blocks ...passed 00:07:03.623 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.623 Test: blockdev writev readv block ...passed 00:07:03.623 Test: blockdev writev readv size > 128k ...passed 00:07:03.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.623 Test: blockdev comparev and writev ...[2024-10-08 18:17:52.342136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2de82c000 len:0x1000 00:07:03.623 [2024-10-08 18:17:52.342176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:03.623 passed 00:07:03.623 Test: blockdev nvme passthru rw ...passed 00:07:03.623 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:17:52.342898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:03.623 [2024-10-08 18:17:52.342925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) 
qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:03.623 passed 00:07:03.623 Test: blockdev nvme admin passthru ...passed 00:07:03.623 Test: blockdev copy ...passed 00:07:03.623 Suite: bdevio tests on: Nvme0n1 00:07:03.623 Test: blockdev write read block ...passed 00:07:03.623 Test: blockdev write zeroes read block ...passed 00:07:03.623 Test: blockdev write zeroes read no split ...passed 00:07:03.623 Test: blockdev write zeroes read split ...passed 00:07:03.623 Test: blockdev write zeroes read split partial ...passed 00:07:03.623 Test: blockdev reset ...[2024-10-08 18:17:52.358640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:03.623 [2024-10-08 18:17:52.360409] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:03.623 passed 00:07:03.623 Test: blockdev write read 8 blocks ...passed 00:07:03.623 Test: blockdev write read size > 128k ...passed 00:07:03.623 Test: blockdev write read invalid size ...passed 00:07:03.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:03.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:03.623 Test: blockdev write read max offset ...passed 00:07:03.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:03.623 Test: blockdev writev readv 8 blocks ...passed 00:07:03.623 Test: blockdev writev readv 30 x 1block ...passed 00:07:03.623 Test: blockdev writev readv block ...passed 00:07:03.623 Test: blockdev writev readv size > 128k ...passed 00:07:03.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:03.623 Test: blockdev comparev and writev ...passed 00:07:03.623 Test: blockdev nvme passthru rw ...[2024-10-08 18:17:52.364702] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:03.623 separate metadata which is not supported yet. 
00:07:03.623 passed 00:07:03.623 Test: blockdev nvme passthru vendor specific ...passed 00:07:03.623 Test: blockdev nvme admin passthru ...[2024-10-08 18:17:52.365099] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:03.623 [2024-10-08 18:17:52.365137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:03.623 passed 00:07:03.623 Test: blockdev copy ...passed 00:07:03.623 00:07:03.623 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.623 suites 6 6 n/a 0 0 00:07:03.623 tests 138 138 138 0 0 00:07:03.623 asserts 893 893 893 0 n/a 00:07:03.623 00:07:03.623 Elapsed time = 0.322 seconds 00:07:03.623 0 00:07:03.623 18:17:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73282 00:07:03.623 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73282 ']' 00:07:03.623 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73282 00:07:03.623 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:03.624 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.624 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73282 00:07:03.624 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.624 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.624 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73282' 00:07:03.624 killing process with pid 73282 00:07:03.624 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73282 00:07:03.624 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73282 00:07:03.882 18:17:52 
blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:03.882 00:07:03.882 real 0m1.329s 00:07:03.882 user 0m3.360s 00:07:03.882 sys 0m0.260s 00:07:03.882 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.882 18:17:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:03.882 ************************************ 00:07:03.882 END TEST bdev_bounds 00:07:03.882 ************************************ 00:07:03.882 18:17:52 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:03.882 18:17:52 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:03.882 18:17:52 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.882 18:17:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.882 ************************************ 00:07:03.882 START TEST bdev_nbd 00:07:03.882 ************************************ 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:03.882 18:17:52 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:03.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73336 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73336 /var/tmp/spdk-nbd.sock 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73336 ']' 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:03.882 18:17:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:03.882 [2024-10-08 18:17:52.672536] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:03.882 [2024-10-08 18:17:52.672653] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.141 [2024-10-08 18:17:52.801256] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:04.141 [2024-10-08 18:17:52.813783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.141 [2024-10-08 18:17:52.844625] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.705 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:04.963 18:17:53 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:04.963 1+0 records in
00:07:04.963 1+0 records out
00:07:04.963 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404749 s, 10.1 MB/s
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:07:04.963 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:05.223 1+0 records in
00:07:05.223 1+0 records out
00:07:05.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432714 s, 9.5 MB/s
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:05.223 18:17:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:05.223 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:05.223 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:05.223 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:05.223 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:07:05.223 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:05.481 1+0 records in
00:07:05.481 1+0 records out
00:07:05.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834716 s, 4.9 MB/s
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:07:05.481 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2
00:07:05.738 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:07:05.738 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:07:05.738 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:07:05.738 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3
00:07:05.738 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:05.738 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:05.738 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:05.738 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions
00:07:05.738 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:05.739 1+0 records in
00:07:05.739 1+0 records out
00:07:05.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708073 s, 5.8 MB/s
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:07:05.739 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:05.997 1+0 records in
00:07:05.997 1+0 records out
00:07:05.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007758 s, 5.3 MB/s
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:07:05.997 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:06.255 1+0 records in
00:07:06.255 1+0 records out
00:07:06.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000636593 s, 6.4 MB/s
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 ))
00:07:06.255 18:17:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:06.513 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:07:06.513 {
00:07:06.513 "nbd_device": "/dev/nbd0",
00:07:06.513 "bdev_name": "Nvme0n1"
00:07:06.513 },
00:07:06.513 {
00:07:06.513 "nbd_device": "/dev/nbd1",
00:07:06.513 "bdev_name": "Nvme1n1"
00:07:06.513 },
00:07:06.513 {
00:07:06.513 "nbd_device": "/dev/nbd2",
00:07:06.514 "bdev_name": "Nvme2n1"
00:07:06.514 },
00:07:06.514 {
00:07:06.514 "nbd_device": "/dev/nbd3",
00:07:06.514 "bdev_name": "Nvme2n2"
00:07:06.514 },
00:07:06.514 {
00:07:06.514 "nbd_device": "/dev/nbd4",
00:07:06.514 "bdev_name": "Nvme2n3"
00:07:06.514 },
00:07:06.514 {
00:07:06.514 "nbd_device": "/dev/nbd5",
00:07:06.514 "bdev_name": "Nvme3n1"
00:07:06.514 }
00:07:06.514 ]'
00:07:06.514 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:07:06.514 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:07:06.514 {
00:07:06.514 "nbd_device": "/dev/nbd0",
00:07:06.514 "bdev_name": "Nvme0n1"
00:07:06.514 },
00:07:06.514 {
00:07:06.514 "nbd_device": "/dev/nbd1",
00:07:06.514 "bdev_name": "Nvme1n1"
00:07:06.514 },
00:07:06.514 {
00:07:06.514 "nbd_device": "/dev/nbd2",
00:07:06.514 "bdev_name": "Nvme2n1"
00:07:06.514 },
00:07:06.514 {
00:07:06.514 "nbd_device": "/dev/nbd3",
00:07:06.514 "bdev_name": "Nvme2n2"
00:07:06.514 },
00:07:06.514 {
00:07:06.514 "nbd_device": "/dev/nbd4",
00:07:06.514 "bdev_name": "Nvme2n3"
00:07:06.514 },
00:07:06.514 {
00:07:06.514 "nbd_device": "/dev/nbd5",
00:07:06.514 "bdev_name": "Nvme3n1"
00:07:06.514 }
00:07:06.514 ]'
00:07:06.514 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:07:06.514 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5'
00:07:06.514 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:06.514 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5')
00:07:06.514 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:06.514 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:07:06.514 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:06.514 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:06.772 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:07:07.031 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:07:07.031 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:07:07.031 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:07:07.031 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:07.031 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:07.031 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:07:07.031 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:07.031 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:07.031 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:07.031 18:17:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:07:07.289 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:07:07.289 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:07:07.289 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:07:07.289 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:07.289 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:07.289 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:07:07.289 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:07.289 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:07.289 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:07.289 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:07:07.546 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:07:07.546 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:07:07.546 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:07:07.546 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:07.546 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:07.546 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:07:07.546 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:07.546 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:07.546 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:07.546 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:07.805 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1')
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:08.062 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:07:08.062 /dev/nbd0
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:08.320 1+0 records in
00:07:08.320 1+0 records out
00:07:08.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000305126 s, 13.4 MB/s
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:08.320 18:17:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1
00:07:08.320 /dev/nbd1
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:08.320 1+0 records in
00:07:08.320 1+0 records out
00:07:08.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712824 s, 5.7 MB/s
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:08.320 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10
00:07:08.595 /dev/nbd10
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:08.595 1+0 records in
00:07:08.595 1+0 records out
00:07:08.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102162 s, 4.0 MB/s
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:08.595 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11
00:07:08.853 /dev/nbd11
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:08.853 1+0 records in
00:07:08.853 1+0 records out
00:07:08.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629225 s, 6.5 MB/s
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12
00:07:08.853 /dev/nbd12
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:08.853 1+0 records in
00:07:08.853 1+0 records out
00:07:08.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642097 s, 6.4 MB/s
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:08.853 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13
00:07:09.110 /dev/nbd13
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:09.110 1+0 records in
00:07:09.110 1+0 records out
00:07:09.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419312 s, 9.8 MB/s
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:09.110 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096
00:07:09.111 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:09.111 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:07:09.111 18:17:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0
00:07:09.111 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:09.111 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 ))
00:07:09.111 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:07:09.111 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:07:09.111 18:17:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd0",
00:07:09.368 "bdev_name": "Nvme0n1"
00:07:09.368 },
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd1",
00:07:09.368 "bdev_name": "Nvme1n1"
00:07:09.368 },
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd10",
00:07:09.368 "bdev_name": "Nvme2n1"
00:07:09.368 },
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd11",
00:07:09.368 "bdev_name": "Nvme2n2"
00:07:09.368 },
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd12",
00:07:09.368 "bdev_name": "Nvme2n3"
00:07:09.368 },
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd13",
00:07:09.368 "bdev_name": "Nvme3n1"
00:07:09.368 }
00:07:09.368 ]'
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd0",
00:07:09.368 "bdev_name": "Nvme0n1"
00:07:09.368 },
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd1",
00:07:09.368 "bdev_name": "Nvme1n1"
00:07:09.368 },
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd10",
00:07:09.368 "bdev_name": "Nvme2n1"
00:07:09.368 },
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd11",
00:07:09.368 "bdev_name": "Nvme2n2"
00:07:09.368 },
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd12",
00:07:09.368 "bdev_name": "Nvme2n3"
00:07:09.368 },
00:07:09.368 {
00:07:09.368 "nbd_device": "/dev/nbd13",
00:07:09.368 "bdev_name": "Nvme3n1"
00:07:09.368 }
00:07:09.368 ]'
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:07:09.368 /dev/nbd1
00:07:09.368 /dev/nbd10
00:07:09.368 /dev/nbd11
00:07:09.368 /dev/nbd12
00:07:09.368 /dev/nbd13'
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:07:09.368 /dev/nbd1
00:07:09.368 /dev/nbd10
00:07:09.368 /dev/nbd11
00:07:09.368 /dev/nbd12
00:07:09.368 /dev/nbd13'
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']'
00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write
00:07:09.368 18:17:58
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:09.368 256+0 records in 00:07:09.368 256+0 records out 00:07:09.368 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00732284 s, 143 MB/s 00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.368 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.628 256+0 records in 00:07:09.628 256+0 records out 00:07:09.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0562737 s, 18.6 MB/s 00:07:09.628 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.628 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.628 256+0 records in 00:07:09.628 256+0 records out 00:07:09.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0610466 s, 17.2 MB/s 00:07:09.628 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.628 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:09.628 256+0 
records in 00:07:09.628 256+0 records out 00:07:09.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0567555 s, 18.5 MB/s 00:07:09.628 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.628 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:09.628 256+0 records in 00:07:09.628 256+0 records out 00:07:09.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0563821 s, 18.6 MB/s 00:07:09.628 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.628 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:09.628 256+0 records in 00:07:09.628 256+0 records out 00:07:09.628 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0542248 s, 19.3 MB/s 00:07:09.628 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.628 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:09.886 256+0 records in 00:07:09.886 256+0 records out 00:07:09.886 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0569243 s, 18.4 MB/s 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks 
/var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:09.886 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.143 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.143 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.143 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.143 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.143 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.143 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.143 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd1 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.144 18:17:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:10.400 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:10.400 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:10.400 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:10.400 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.400 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.400 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:10.400 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.400 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.401 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.401 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:10.657 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:10.657 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd11 00:07:10.657 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:10.657 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.657 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.657 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:10.657 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.657 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.657 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.658 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:10.914 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:10.914 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:10.914 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:10.914 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.914 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.914 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:10.914 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.914 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.914 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.914 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.172 18:17:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@109 -- # return 0 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:11.429 malloc_lvol_verify 00:07:11.429 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:11.688 82276bc9-4d82-4aef-b2b7-2c55ebb20524 00:07:11.688 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:11.944 da102818-374e-4c22-bde8-e14c542401cf 00:07:11.944 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:12.201 /dev/nbd0 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:12.201 mke2fs 1.47.0 (5-Feb-2023) 00:07:12.201 Discarding device blocks: 0/4096 done 00:07:12.201 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:12.201 00:07:12.201 Allocating group tables: 0/1 done 
00:07:12.201 Writing inode tables: 0/1 done 00:07:12.201 Creating journal (1024 blocks): done 00:07:12.201 Writing superblocks and filesystem accounting information: 0/1 done 00:07:12.201 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.201 18:18:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73336 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73336 ']' 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73336 
00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73336 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73336' 00:07:12.457 killing process with pid 73336 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73336 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73336 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:12.457 00:07:12.457 real 0m8.669s 00:07:12.457 user 0m12.855s 00:07:12.457 sys 0m2.827s 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.457 18:18:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:12.457 ************************************ 00:07:12.457 END TEST bdev_nbd 00:07:12.457 ************************************ 00:07:12.714 18:18:01 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:12.714 18:18:01 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:12.714 skipping fio tests on NVMe due to multi-ns failures. 00:07:12.714 18:18:01 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:12.714 18:18:01 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:12.714 18:18:01 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:12.714 18:18:01 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:12.714 18:18:01 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.714 18:18:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.714 ************************************ 00:07:12.714 START TEST bdev_verify 00:07:12.714 ************************************ 00:07:12.714 18:18:01 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:12.714 [2024-10-08 18:18:01.383429] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:12.714 [2024-10-08 18:18:01.383551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73692 ] 00:07:12.714 [2024-10-08 18:18:01.514503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:12.714 [2024-10-08 18:18:01.534972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.971 [2024-10-08 18:18:01.566710] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.971 [2024-10-08 18:18:01.566774] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.228 Running I/O for 5 seconds... 
00:07:15.536 22016.00 IOPS, 86.00 MiB/s [2024-10-08T18:18:05.324Z] 22016.00 IOPS, 86.00 MiB/s [2024-10-08T18:18:06.255Z] 21610.67 IOPS, 84.42 MiB/s [2024-10-08T18:18:07.196Z] 21728.00 IOPS, 84.88 MiB/s [2024-10-08T18:18:07.196Z] 22028.80 IOPS, 86.05 MiB/s 00:07:18.346 Latency(us) 00:07:18.346 [2024-10-08T18:18:07.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.346 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0x0 length 0xbd0bd 00:07:18.346 Nvme0n1 : 5.03 1782.48 6.96 0.00 0.00 71584.60 15123.69 70980.53 00:07:18.346 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:18.346 Nvme0n1 : 5.05 1825.98 7.13 0.00 0.00 69810.94 12351.02 75820.11 00:07:18.346 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0x0 length 0xa0000 00:07:18.346 Nvme1n1 : 5.06 1784.45 6.97 0.00 0.00 71309.96 7410.61 62914.56 00:07:18.346 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0xa0000 length 0xa0000 00:07:18.346 Nvme1n1 : 5.07 1831.23 7.15 0.00 0.00 69430.44 5973.86 62511.26 00:07:18.346 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0x0 length 0x80000 00:07:18.346 Nvme2n1 : 5.07 1792.47 7.00 0.00 0.00 71053.57 10737.82 58881.58 00:07:18.346 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0x80000 length 0x80000 00:07:18.346 Nvme2n1 : 5.07 1830.71 7.15 0.00 0.00 69284.11 6452.78 59284.87 00:07:18.346 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0x0 length 0x80000 00:07:18.346 Nvme2n2 : 5.07 1791.96 7.00 0.00 0.00 70950.51 10485.76 59688.17 
00:07:18.346 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0x80000 length 0x80000 00:07:18.346 Nvme2n2 : 5.08 1839.61 7.19 0.00 0.00 68922.46 7914.73 60091.47 00:07:18.346 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0x0 length 0x80000 00:07:18.346 Nvme2n3 : 5.07 1791.46 7.00 0.00 0.00 70823.55 10889.06 61301.37 00:07:18.346 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0x80000 length 0x80000 00:07:18.346 Nvme2n3 : 5.08 1839.12 7.18 0.00 0.00 68802.38 8116.38 60898.07 00:07:18.346 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0x0 length 0x20000 00:07:18.346 Nvme3n1 : 5.07 1790.99 7.00 0.00 0.00 70691.42 10687.41 63317.86 00:07:18.346 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:18.346 Verification LBA range: start 0x20000 length 0x20000 00:07:18.346 Nvme3n1 : 5.08 1838.64 7.18 0.00 0.00 68722.69 8368.44 62107.96 00:07:18.346 [2024-10-08T18:18:07.196Z] =================================================================================================================== 00:07:18.346 [2024-10-08T18:18:07.196Z] Total : 21739.10 84.92 0.00 0.00 70101.09 5973.86 75820.11 00:07:19.728 00:07:19.728 real 0m7.201s 00:07:19.728 user 0m13.693s 00:07:19.728 sys 0m0.207s 00:07:19.728 18:18:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.728 18:18:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:19.728 ************************************ 00:07:19.728 END TEST bdev_verify 00:07:19.728 ************************************ 00:07:19.728 18:18:08 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:19.728 18:18:08 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:19.728 18:18:08 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.728 18:18:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:19.728 ************************************ 00:07:19.728 START TEST bdev_verify_big_io 00:07:19.728 ************************************ 00:07:19.728 18:18:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:19.987 [2024-10-08 18:18:08.630591] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:19.987 [2024-10-08 18:18:08.630701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73792 ] 00:07:19.987 [2024-10-08 18:18:08.759093] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:19.987 [2024-10-08 18:18:08.779343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.987 [2024-10-08 18:18:08.815713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.987 [2024-10-08 18:18:08.815821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.552 Running I/O for 5 seconds... 
00:07:24.377 16.00 IOPS, 1.00 MiB/s [2024-10-08T18:18:15.124Z] 1482.00 IOPS, 92.62 MiB/s [2024-10-08T18:18:15.382Z] 2200.33 IOPS, 137.52 MiB/s 00:07:26.532 Latency(us) 00:07:26.532 [2024-10-08T18:18:15.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.532 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0x0 length 0xbd0b 00:07:26.532 Nvme0n1 : 5.64 141.82 8.86 0.00 0.00 867463.10 21273.99 1155046.79 00:07:26.532 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:26.532 Nvme0n1 : 5.90 128.02 8.00 0.00 0.00 967957.17 14720.39 1161499.57 00:07:26.532 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0x0 length 0xa000 00:07:26.532 Nvme1n1 : 5.73 142.44 8.90 0.00 0.00 842075.33 85095.98 967916.31 00:07:26.532 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0xa000 length 0xa000 00:07:26.532 Nvme1n1 : 5.90 125.97 7.87 0.00 0.00 942443.64 83079.48 955010.76 00:07:26.532 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0x0 length 0x8000 00:07:26.532 Nvme2n1 : 5.84 148.81 9.30 0.00 0.00 775912.71 44564.48 922746.88 00:07:26.532 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0x8000 length 0x8000 00:07:26.532 Nvme2n1 : 5.90 125.61 7.85 0.00 0.00 909999.98 83079.48 884030.23 00:07:26.532 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0x0 length 0x8000 00:07:26.532 Nvme2n2 : 5.84 153.39 9.59 0.00 0.00 733062.02 63721.16 942105.21 00:07:26.532 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification 
LBA range: start 0x8000 length 0x8000 00:07:26.532 Nvme2n2 : 5.90 130.14 8.13 0.00 0.00 857115.83 76626.71 903388.55 00:07:26.532 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0x0 length 0x8000 00:07:26.532 Nvme2n3 : 5.90 162.64 10.17 0.00 0.00 669693.74 19559.98 967916.31 00:07:26.532 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0x8000 length 0x8000 00:07:26.532 Nvme2n3 : 6.00 137.44 8.59 0.00 0.00 785952.10 42346.34 929199.66 00:07:26.532 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0x0 length 0x2000 00:07:26.532 Nvme3n1 : 6.00 187.48 11.72 0.00 0.00 563053.52 620.70 987274.63 00:07:26.532 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:26.532 Verification LBA range: start 0x2000 length 0x2000 00:07:26.532 Nvme3n1 : 6.01 145.43 9.09 0.00 0.00 719591.80 1052.36 1606741.07 00:07:26.532 [2024-10-08T18:18:15.382Z] =================================================================================================================== 00:07:26.532 [2024-10-08T18:18:15.382Z] Total : 1729.19 108.07 0.00 0.00 789372.03 620.70 1606741.07 00:07:27.470 00:07:27.470 real 0m7.503s 00:07:27.470 user 0m14.251s 00:07:27.470 sys 0m0.239s 00:07:27.470 18:18:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.470 ************************************ 00:07:27.470 END TEST bdev_verify_big_io 00:07:27.470 ************************************ 00:07:27.470 18:18:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:27.470 18:18:16 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.470 18:18:16 
blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:27.470 18:18:16 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.470 18:18:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:27.470 ************************************ 00:07:27.470 START TEST bdev_write_zeroes 00:07:27.470 ************************************ 00:07:27.470 18:18:16 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:27.470 [2024-10-08 18:18:16.174038] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:27.470 [2024-10-08 18:18:16.174151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73892 ] 00:07:27.470 [2024-10-08 18:18:16.302864] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:27.728 [2024-10-08 18:18:16.323921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.728 [2024-10-08 18:18:16.366661] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.987 Running I/O for 1 seconds... 
00:07:29.362 57536.00 IOPS, 224.75 MiB/s 00:07:29.362 Latency(us) 00:07:29.362 [2024-10-08T18:18:18.212Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:29.362 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.362 Nvme0n1 : 1.02 9578.63 37.42 0.00 0.00 13331.82 5217.67 21979.77 00:07:29.362 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.362 Nvme1n1 : 1.02 9567.41 37.37 0.00 0.00 13332.96 9729.58 22584.71 00:07:29.362 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.362 Nvme2n1 : 1.02 9556.52 37.33 0.00 0.00 13302.60 9779.99 22584.71 00:07:29.362 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.362 Nvme2n2 : 1.03 9545.71 37.29 0.00 0.00 13230.92 6024.27 22181.42 00:07:29.362 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.362 Nvme2n3 : 1.03 9534.82 37.25 0.00 0.00 13224.58 5747.00 21374.82 00:07:29.362 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:29.362 Nvme3n1 : 1.03 9461.64 36.96 0.00 0.00 13309.59 9124.63 22181.42 00:07:29.362 [2024-10-08T18:18:18.212Z] =================================================================================================================== 00:07:29.362 [2024-10-08T18:18:18.212Z] Total : 57244.72 223.61 0.00 0.00 13288.72 5217.67 22584.71 00:07:29.362 00:07:29.362 real 0m1.913s 00:07:29.362 user 0m1.590s 00:07:29.362 sys 0m0.212s 00:07:29.362 18:18:18 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.362 18:18:18 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 ************************************ 00:07:29.362 END TEST bdev_write_zeroes 00:07:29.362 ************************************ 00:07:29.362 18:18:18 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.362 18:18:18 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:29.362 18:18:18 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.362 18:18:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.362 ************************************ 00:07:29.362 START TEST bdev_json_nonenclosed 00:07:29.362 ************************************ 00:07:29.362 18:18:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.362 [2024-10-08 18:18:18.124734] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:29.362 [2024-10-08 18:18:18.124851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73934 ] 00:07:29.621 [2024-10-08 18:18:18.253038] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:29.621 [2024-10-08 18:18:18.271965] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.621 [2024-10-08 18:18:18.313894] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.621 [2024-10-08 18:18:18.313989] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:07:29.621 [2024-10-08 18:18:18.314008] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:29.621 [2024-10-08 18:18:18.314017] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.621 00:07:29.621 real 0m0.338s 00:07:29.621 user 0m0.139s 00:07:29.621 sys 0m0.095s 00:07:29.621 18:18:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.621 18:18:18 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:29.621 ************************************ 00:07:29.621 END TEST bdev_json_nonenclosed 00:07:29.621 ************************************ 00:07:29.621 18:18:18 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.621 18:18:18 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:29.621 18:18:18 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.621 18:18:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.621 ************************************ 00:07:29.621 START TEST bdev_json_nonarray 00:07:29.621 ************************************ 00:07:29.621 18:18:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.880 [2024-10-08 18:18:18.511503] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:07:29.880 [2024-10-08 18:18:18.511623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73956 ] 00:07:29.880 [2024-10-08 18:18:18.639892] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:29.880 [2024-10-08 18:18:18.662120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.880 [2024-10-08 18:18:18.703343] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.880 [2024-10-08 18:18:18.703444] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:07:29.880 [2024-10-08 18:18:18.703463] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:29.880 [2024-10-08 18:18:18.703473] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.157 00:07:30.157 real 0m0.344s 00:07:30.157 user 0m0.139s 00:07:30.157 sys 0m0.101s 00:07:30.157 18:18:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.157 18:18:18 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:30.157 ************************************ 00:07:30.157 END TEST bdev_json_nonarray 00:07:30.157 ************************************ 00:07:30.157 18:18:18 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:30.157 18:18:18 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:30.157 18:18:18 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:30.157 18:18:18 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:30.157 18:18:18 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:30.157 18:18:18 blockdev_nvme 
-- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:30.157 18:18:18 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.157 18:18:18 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:30.157 18:18:18 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:30.157 18:18:18 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:30.157 18:18:18 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:30.157 ************************************ 00:07:30.157 END TEST blockdev_nvme 00:07:30.157 ************************************ 00:07:30.157 00:07:30.157 real 0m30.400s 00:07:30.157 user 0m48.560s 00:07:30.157 sys 0m4.939s 00:07:30.157 18:18:18 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.158 18:18:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.158 18:18:18 -- spdk/autotest.sh@209 -- # uname -s 00:07:30.158 18:18:18 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:30.158 18:18:18 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:30.158 18:18:18 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:30.158 18:18:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.158 18:18:18 -- common/autotest_common.sh@10 -- # set +x 00:07:30.158 ************************************ 00:07:30.158 START TEST blockdev_nvme_gpt 00:07:30.158 ************************************ 00:07:30.158 18:18:18 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:30.158 * Looking for test storage... 
00:07:30.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:30.158 18:18:18 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:30.158 18:18:18 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:30.158 18:18:18 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:30.429 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.429 18:18:19 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:30.429 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.429 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:30.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.429 --rc genhtml_branch_coverage=1 00:07:30.429 --rc genhtml_function_coverage=1 00:07:30.429 --rc genhtml_legend=1 00:07:30.429 --rc geninfo_all_blocks=1 00:07:30.429 --rc geninfo_unexecuted_blocks=1 00:07:30.429 00:07:30.429 ' 00:07:30.430 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:30.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.430 --rc genhtml_branch_coverage=1 00:07:30.430 --rc genhtml_function_coverage=1 00:07:30.430 --rc genhtml_legend=1 00:07:30.430 --rc geninfo_all_blocks=1 00:07:30.430 
--rc geninfo_unexecuted_blocks=1 00:07:30.430 00:07:30.430 ' 00:07:30.430 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:30.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.430 --rc genhtml_branch_coverage=1 00:07:30.430 --rc genhtml_function_coverage=1 00:07:30.430 --rc genhtml_legend=1 00:07:30.430 --rc geninfo_all_blocks=1 00:07:30.430 --rc geninfo_unexecuted_blocks=1 00:07:30.430 00:07:30.430 ' 00:07:30.430 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:30.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.430 --rc genhtml_branch_coverage=1 00:07:30.430 --rc genhtml_function_coverage=1 00:07:30.430 --rc genhtml_legend=1 00:07:30.430 --rc geninfo_all_blocks=1 00:07:30.430 --rc geninfo_unexecuted_blocks=1 00:07:30.430 00:07:30.430 ' 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 
00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:30.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74040 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74040 00:07:30.430 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 74040 ']' 00:07:30.430 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.430 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.430 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:30.430 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.430 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.430 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:30.430 [2024-10-08 18:18:19.097055] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:30.430 [2024-10-08 18:18:19.097168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74040 ] 00:07:30.430 [2024-10-08 18:18:19.226210] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:30.430 [2024-10-08 18:18:19.246088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.688 [2024-10-08 18:18:19.289748] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.254 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:31.254 18:18:19 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:31.254 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:31.254 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:31.255 18:18:19 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:31.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:31.513 Waiting for block devices as requested 00:07:31.771 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.771 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.771 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.771 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:37.037 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:37.037 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:37.037 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- 
# local device=nvme2n3 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:37.038 18:18:25 blockdev_nvme_gpt -- 
bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:37.038 BYT; 00:07:37.038 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:37.038 BYT; 00:07:37.038 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:37.038 18:18:25 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:37.038 18:18:25 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:37.973 The operation has completed successfully. 
00:07:37.973 18:18:26 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:39.348 The operation has completed successfully. 00:07:39.348 18:18:27 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.349 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.955 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.955 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.955 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:39.955 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.215 18:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:40.215 18:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.215 18:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.215 [] 00:07:40.215 18:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.215 18:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:40.215 18:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:40.215 18:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:40.215 18:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:40.215 18:18:28 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", 
"params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:40.215 18:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.215 18:18:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.474 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.474 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:40.474 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.474 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.474 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.474 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:40.474 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:40.474 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.474 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.474 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.474 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:40.474 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.474 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.474 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t 
bdevs 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4ff1fdbd-1812-4501-b41b-468e00722c81"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4ff1fdbd-1812-4501-b41b-468e00722c81",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' 
"nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "96280069-f2f0-42d2-bf2b-f092e5da24d7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "96280069-f2f0-42d2-bf2b-f092e5da24d7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": 
false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8e11f0fe-907d-47d4-96dc-e715a97e1b48"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8e11f0fe-907d-47d4-96dc-e715a97e1b48",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "6f2f8356-b6d2-4022-96e2-5b87ad9b5994"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6f2f8356-b6d2-4022-96e2-5b87ad9b5994",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5ff5bbc6-03d8-4004-a19a-8137ea30a314"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5ff5bbc6-03d8-4004-a19a-8137ea30a314",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:40.475 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 74040 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 74040 ']' 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 74040 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74040 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:40.475 killing process with pid 74040 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with 
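The `bdevs`/`bdevs_name` steps traced above read `rpc_cmd bdev_get_bdevs` output, keep only unclaimed bdevs with `jq 'select(.claimed == false)'`, and collect the `.name` fields into an array via `mapfile`. A trimmed sketch of that filtering, with a two-entry inline JSON array standing in for the real RPC output (assumes `jq` is installed):

```shell
# Stand-in for rpc_cmd bdev_get_bdevs output: one unclaimed, one claimed bdev
bdevs_json='[{"name":"Nvme0n1","claimed":false},{"name":"Nvme1n1","claimed":true}]'

# Keep only unclaimed bdevs and collect their names into an array
mapfile -t bdevs_name < <(jq -r '.[] | select(.claimed == false) | .name' <<<"$bdevs_json")

printf '%s\n' "${bdevs_name[@]}"   # Nvme0n1
```

In the test itself the array then seeds `bdev_list`, which is why only unclaimed bdevs (here all of them, since nothing has claimed the GPT partitions) show up in the dump above.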
pid 74040' 00:07:40.475 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 74040 00:07:40.476 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 74040 00:07:40.734 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:40.734 18:18:29 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:40.734 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:40.734 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.734 18:18:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:40.734 ************************************ 00:07:40.734 START TEST bdev_hello_world 00:07:40.734 ************************************ 00:07:40.734 18:18:29 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:40.993 [2024-10-08 18:18:29.605968] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:40.993 [2024-10-08 18:18:29.606082] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74642 ] 00:07:40.993 [2024-10-08 18:18:29.733248] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
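The `killprocess 74040` trace above guards the kill: it first checks the pid is still alive (`kill -0`), then compares the process's `comm` name against the expected reactor before signaling, and finally waits for the pid to exit. A minimal sketch of that guard, using a background `sleep` as a stand-in for the SPDK app process:

```shell
# Stand-in target process for the killprocess pattern
sleep 30 &
pid=$!

# Only signal the pid if it is still the process we think it is
name=$(ps --no-headers -o comm= "$pid")
if [ "$name" = "sleep" ]; then
    kill "$pid"
fi

# Reap it; wait returns non-zero for a signaled child, so tolerate that
wait "$pid" 2>/dev/null || true
```

Comparing the `comm` name before killing avoids signaling an unrelated process if the original pid exited and was recycled between the check and the kill.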
00:07:40.993 [2024-10-08 18:18:29.752658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.993 [2024-10-08 18:18:29.783715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.559 [2024-10-08 18:18:30.148428] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:41.559 [2024-10-08 18:18:30.148470] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:41.559 [2024-10-08 18:18:30.148494] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:41.559 [2024-10-08 18:18:30.150658] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:41.559 [2024-10-08 18:18:30.151298] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:41.559 [2024-10-08 18:18:30.151337] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:41.559 [2024-10-08 18:18:30.151692] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:41.559 00:07:41.559 [2024-10-08 18:18:30.151725] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:41.559 00:07:41.559 real 0m0.767s 00:07:41.559 user 0m0.509s 00:07:41.559 sys 0m0.153s 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.560 ************************************ 00:07:41.560 END TEST bdev_hello_world 00:07:41.560 ************************************ 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:41.560 18:18:30 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:41.560 18:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:41.560 18:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.560 18:18:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:41.560 ************************************ 00:07:41.560 START TEST bdev_bounds 
00:07:41.560 ************************************ 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74668 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:41.560 Process bdevio pid: 74668 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74668' 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74668 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 74668 ']' 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:41.560 18:18:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:41.818 [2024-10-08 18:18:30.425392] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:07:41.818 [2024-10-08 18:18:30.425518] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74668 ] 00:07:41.818 [2024-10-08 18:18:30.557376] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.818 [2024-10-08 18:18:30.575826] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.818 [2024-10-08 18:18:30.610933] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.818 [2024-10-08 18:18:30.611227] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.818 [2024-10-08 18:18:30.611296] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.753 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.753 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:42.753 18:18:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:42.753 I/O targets: 00:07:42.753 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:42.753 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:42.753 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:42.753 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.753 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.753 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:42.753 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:42.753 00:07:42.753 00:07:42.753 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.753 http://cunit.sourceforge.net/ 00:07:42.753 00:07:42.753 00:07:42.753 Suite: bdevio tests on: Nvme3n1 00:07:42.753 Test: blockdev write read block 
...passed 00:07:42.753 Test: blockdev write zeroes read block ...passed 00:07:42.753 Test: blockdev write zeroes read no split ...passed 00:07:42.753 Test: blockdev write zeroes read split ...passed 00:07:42.753 Test: blockdev write zeroes read split partial ...passed 00:07:42.753 Test: blockdev reset ...[2024-10-08 18:18:31.383417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:42.753 [2024-10-08 18:18:31.386956] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:42.753 passed 00:07:42.753 Test: blockdev write read 8 blocks ...passed 00:07:42.753 Test: blockdev write read size > 128k ...passed 00:07:42.753 Test: blockdev write read invalid size ...passed 00:07:42.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.753 Test: blockdev write read max offset ...passed 00:07:42.753 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.753 Test: blockdev writev readv 8 blocks ...passed 00:07:42.753 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.753 Test: blockdev writev readv block ...passed 00:07:42.753 Test: blockdev writev readv size > 128k ...passed 00:07:42.753 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.753 Test: blockdev comparev and writev ...[2024-10-08 18:18:31.393569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dc20e000 len:0x1000 00:07:42.753 [2024-10-08 18:18:31.393621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.753 passed 00:07:42.753 Test: blockdev nvme passthru rw ...passed 00:07:42.753 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:18:31.394501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.753 [2024-10-08 18:18:31.394534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.753 passed 00:07:42.753 Test: blockdev nvme admin passthru ...passed 00:07:42.753 Test: blockdev copy ...passed 00:07:42.753 Suite: bdevio tests on: Nvme2n3 00:07:42.753 Test: blockdev write read block ...passed 00:07:42.753 Test: blockdev write zeroes read block ...passed 00:07:42.753 Test: blockdev write zeroes read no split ...passed 00:07:42.753 Test: blockdev write zeroes read split ...passed 00:07:42.753 Test: blockdev write zeroes read split partial ...passed 00:07:42.753 Test: blockdev reset ...[2024-10-08 18:18:31.410615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:42.753 [2024-10-08 18:18:31.412708] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:42.753 passed 00:07:42.753 Test: blockdev write read 8 blocks ...passed 00:07:42.753 Test: blockdev write read size > 128k ...passed 00:07:42.753 Test: blockdev write read invalid size ...passed 00:07:42.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.753 Test: blockdev write read max offset ...passed 00:07:42.753 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.753 Test: blockdev writev readv 8 blocks ...passed 00:07:42.754 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.754 Test: blockdev writev readv block ...passed 00:07:42.754 Test: blockdev writev readv size > 128k ...passed 00:07:42.754 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.754 Test: blockdev comparev and writev ...[2024-10-08 18:18:31.418072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 
len:1 SGL DATA BLOCK ADDRESS 0x2dc20a000 len:0x1000 00:07:42.754 [2024-10-08 18:18:31.418115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.754 passed 00:07:42.754 Test: blockdev nvme passthru rw ...passed 00:07:42.754 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:18:31.418925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.754 [2024-10-08 18:18:31.418953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.754 passed 00:07:42.754 Test: blockdev nvme admin passthru ...passed 00:07:42.754 Test: blockdev copy ...passed 00:07:42.754 Suite: bdevio tests on: Nvme2n2 00:07:42.754 Test: blockdev write read block ...passed 00:07:42.754 Test: blockdev write zeroes read block ...passed 00:07:42.754 Test: blockdev write zeroes read no split ...passed 00:07:42.754 Test: blockdev write zeroes read split ...passed 00:07:42.754 Test: blockdev write zeroes read split partial ...passed 00:07:42.754 Test: blockdev reset ...[2024-10-08 18:18:31.433205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:42.754 [2024-10-08 18:18:31.435021] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:42.754 passed 00:07:42.754 Test: blockdev write read 8 blocks ...passed 00:07:42.754 Test: blockdev write read size > 128k ...passed 00:07:42.754 Test: blockdev write read invalid size ...passed 00:07:42.754 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.754 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.754 Test: blockdev write read max offset ...passed 00:07:42.754 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.754 Test: blockdev writev readv 8 blocks ...passed 00:07:42.754 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.754 Test: blockdev writev readv block ...passed 00:07:42.754 Test: blockdev writev readv size > 128k ...passed 00:07:42.754 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.754 Test: blockdev comparev and writev ...[2024-10-08 18:18:31.440547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d9805000 len:0x1000 00:07:42.754 [2024-10-08 18:18:31.440587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.754 passed 00:07:42.754 Test: blockdev nvme passthru rw ...passed 00:07:42.754 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:18:31.441531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.754 [2024-10-08 18:18:31.441557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.754 passed 00:07:42.754 Test: blockdev nvme admin passthru ...passed 00:07:42.754 Test: blockdev copy ...passed 00:07:42.754 Suite: bdevio tests on: Nvme2n1 00:07:42.754 Test: blockdev write read block ...passed 00:07:42.754 Test: blockdev write zeroes read block ...passed 00:07:42.754 Test: blockdev 
write zeroes read no split ...passed 00:07:42.754 Test: blockdev write zeroes read split ...passed 00:07:42.754 Test: blockdev write zeroes read split partial ...passed 00:07:42.754 Test: blockdev reset ...[2024-10-08 18:18:31.470371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:42.754 [2024-10-08 18:18:31.472356] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:42.754 passed 00:07:42.754 Test: blockdev write read 8 blocks ...passed 00:07:42.754 Test: blockdev write read size > 128k ...passed 00:07:42.754 Test: blockdev write read invalid size ...passed 00:07:42.754 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.754 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.754 Test: blockdev write read max offset ...passed 00:07:42.754 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.754 Test: blockdev writev readv 8 blocks ...passed 00:07:42.754 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.754 Test: blockdev writev readv block ...passed 00:07:42.754 Test: blockdev writev readv size > 128k ...passed 00:07:42.754 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.754 Test: blockdev comparev and writev ...[2024-10-08 18:18:31.479231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dc602000 len:0x1000 00:07:42.754 [2024-10-08 18:18:31.479272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.754 passed 00:07:42.754 Test: blockdev nvme passthru rw ...passed 00:07:42.754 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:18:31.479877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:42.754 [2024-10-08 18:18:31.479903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:42.754 passed 00:07:42.754 Test: blockdev nvme admin passthru ...passed 00:07:42.754 Test: blockdev copy ...passed 00:07:42.754 Suite: bdevio tests on: Nvme1n1p2 00:07:42.754 Test: blockdev write read block ...passed 00:07:42.754 Test: blockdev write zeroes read block ...passed 00:07:42.754 Test: blockdev write zeroes read no split ...passed 00:07:42.754 Test: blockdev write zeroes read split ...passed 00:07:42.754 Test: blockdev write zeroes read split partial ...passed 00:07:42.754 Test: blockdev reset ...[2024-10-08 18:18:31.506573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:42.754 [2024-10-08 18:18:31.508271] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:42.754 passed 00:07:42.754 Test: blockdev write read 8 blocks ...passed 00:07:42.754 Test: blockdev write read size > 128k ...passed 00:07:42.754 Test: blockdev write read invalid size ...passed 00:07:42.754 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.754 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.754 Test: blockdev write read max offset ...passed 00:07:42.754 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.754 Test: blockdev writev readv 8 blocks ...passed 00:07:42.754 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.754 Test: blockdev writev readv block ...passed 00:07:42.754 Test: blockdev writev readv size > 128k ...passed 00:07:42.754 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.754 Test: blockdev comparev and writev ...[2024-10-08 18:18:31.513727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dd23b000 len:0x1000 00:07:42.754 [2024-10-08 
18:18:31.513775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.754 passed 00:07:42.754 Test: blockdev nvme passthru rw ...passed 00:07:42.754 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.754 Test: blockdev nvme admin passthru ...passed 00:07:42.754 Test: blockdev copy ...passed 00:07:42.754 Suite: bdevio tests on: Nvme1n1p1 00:07:42.754 Test: blockdev write read block ...passed 00:07:42.754 Test: blockdev write zeroes read block ...passed 00:07:42.754 Test: blockdev write zeroes read no split ...passed 00:07:42.754 Test: blockdev write zeroes read split ...passed 00:07:42.754 Test: blockdev write zeroes read split partial ...passed 00:07:42.754 Test: blockdev reset ...[2024-10-08 18:18:31.540558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:42.754 [2024-10-08 18:18:31.542137] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:42.754 passed 00:07:42.754 Test: blockdev write read 8 blocks ...passed 00:07:42.754 Test: blockdev write read size > 128k ...passed 00:07:42.754 Test: blockdev write read invalid size ...passed 00:07:42.754 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.754 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.754 Test: blockdev write read max offset ...passed 00:07:42.754 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.754 Test: blockdev writev readv 8 blocks ...passed 00:07:42.754 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.754 Test: blockdev writev readv block ...passed 00:07:42.754 Test: blockdev writev readv size > 128k ...passed 00:07:42.754 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.754 Test: blockdev comparev and writev ...[2024-10-08 18:18:31.548599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2dd237000 len:0x1000 00:07:42.754 [2024-10-08 18:18:31.548708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:42.754 passed 00:07:42.754 Test: blockdev nvme passthru rw ...passed 00:07:42.754 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.754 Test: blockdev nvme admin passthru ...passed 00:07:42.754 Test: blockdev copy ...passed 00:07:42.754 Suite: bdevio tests on: Nvme0n1 00:07:42.754 Test: blockdev write read block ...passed 00:07:42.754 Test: blockdev write zeroes read block ...passed 00:07:43.016 Test: blockdev write zeroes read no split ...passed 00:07:43.016 Test: blockdev write zeroes read split ...passed 00:07:43.016 Test: blockdev write zeroes read split partial ...passed 00:07:43.016 Test: blockdev reset ...[2024-10-08 18:18:31.656119] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:43.016 
[2024-10-08 18:18:31.657872] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:43.016 passed 00:07:43.016 Test: blockdev write read 8 blocks ...passed 00:07:43.016 Test: blockdev write read size > 128k ...passed 00:07:43.016 Test: blockdev write read invalid size ...passed 00:07:43.016 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.016 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.016 Test: blockdev write read max offset ...passed 00:07:43.016 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.016 Test: blockdev writev readv 8 blocks ...passed 00:07:43.016 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.016 Test: blockdev writev readv block ...passed 00:07:43.016 Test: blockdev writev readv size > 128k ...passed 00:07:43.016 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.016 Test: blockdev comparev and writev ...[2024-10-08 18:18:31.663104] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:43.016 separate metadata which is not supported yet. 
00:07:43.016 passed 00:07:43.016 Test: blockdev nvme passthru rw ...passed 00:07:43.016 Test: blockdev nvme passthru vendor specific ...[2024-10-08 18:18:31.663708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:43.016 [2024-10-08 18:18:31.663767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:43.016 passed 00:07:43.016 Test: blockdev nvme admin passthru ...passed 00:07:43.016 Test: blockdev copy ...passed 00:07:43.016 00:07:43.016 Run Summary: Type Total Ran Passed Failed Inactive 00:07:43.016 suites 7 7 n/a 0 0 00:07:43.016 tests 161 161 161 0 0 00:07:43.016 asserts 1025 1025 1025 0 n/a 00:07:43.016 00:07:43.016 Elapsed time = 0.686 seconds 00:07:43.016 0 00:07:43.016 18:18:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74668 00:07:43.016 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 74668 ']' 00:07:43.016 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 74668 00:07:43.016 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:43.016 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:43.016 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74668 00:07:43.016 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:43.017 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:43.017 killing process with pid 74668 00:07:43.017 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74668' 00:07:43.017 18:18:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 74668 00:07:43.017 18:18:31 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 74668 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:43.951 00:07:43.951 real 0m2.347s 00:07:43.951 user 0m6.225s 00:07:43.951 sys 0m0.299s 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:43.951 ************************************ 00:07:43.951 END TEST bdev_bounds 00:07:43.951 ************************************ 00:07:43.951 18:18:32 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:43.951 18:18:32 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:43.951 18:18:32 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.951 18:18:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:43.951 ************************************ 00:07:43.951 START TEST bdev_nbd 00:07:43.951 ************************************ 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # 
bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74727 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74727 /var/tmp/spdk-nbd.sock 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 74727 ']' 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd 
-- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.951 18:18:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:44.210 [2024-10-08 18:18:32.811430] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:44.210 [2024-10-08 18:18:32.811537] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.210 [2024-10-08 18:18:32.940448] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:44.210 [2024-10-08 18:18:32.959312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.210 [2024-10-08 18:18:32.992240] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.143 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.143 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:45.143 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:45.143 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.143 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:45.143 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.144 1+0 records in 00:07:45.144 1+0 records out 00:07:45.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290951 s, 14.1 MB/s 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 
']' 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.144 18:18:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.402 1+0 records in 00:07:45.402 1+0 records out 00:07:45.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488546 s, 8.4 MB/s 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.402 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd 
-- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.661 1+0 records in 00:07:45.661 1+0 records out 00:07:45.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412121 s, 9.9 MB/s 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.661 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 
/proc/partitions 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.930 1+0 records in 00:07:45.930 1+0 records out 00:07:45.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004311 s, 9.5 MB/s 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:45.930 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:46.196 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:46.196 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:46.196 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:46.196 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:46.196 18:18:34 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.197 1+0 records in 00:07:46.197 1+0 records out 00:07:46.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365071 s, 11.2 MB/s 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:46.197 18:18:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.197 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.456 1+0 records in 00:07:46.456 1+0 records out 00:07:46.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465016 s, 8.8 MB/s 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.456 18:18:35 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.456 1+0 records in 00:07:46.456 1+0 records out 00:07:46.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559289 s, 7.3 MB/s 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 
00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:46.456 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd0", 00:07:46.715 "bdev_name": "Nvme0n1" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd1", 00:07:46.715 "bdev_name": "Nvme1n1p1" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd2", 00:07:46.715 "bdev_name": "Nvme1n1p2" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd3", 00:07:46.715 "bdev_name": "Nvme2n1" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd4", 00:07:46.715 "bdev_name": "Nvme2n2" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd5", 00:07:46.715 "bdev_name": "Nvme2n3" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd6", 00:07:46.715 "bdev_name": "Nvme3n1" 00:07:46.715 } 00:07:46.715 ]' 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd0", 00:07:46.715 "bdev_name": "Nvme0n1" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd1", 00:07:46.715 "bdev_name": "Nvme1n1p1" 
00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd2", 00:07:46.715 "bdev_name": "Nvme1n1p2" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd3", 00:07:46.715 "bdev_name": "Nvme2n1" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd4", 00:07:46.715 "bdev_name": "Nvme2n2" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd5", 00:07:46.715 "bdev_name": "Nvme2n3" 00:07:46.715 }, 00:07:46.715 { 00:07:46.715 "nbd_device": "/dev/nbd6", 00:07:46.715 "bdev_name": "Nvme3n1" 00:07:46.715 } 00:07:46.715 ]' 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.715 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 
1 )) 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:46.973 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:47.232 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.232 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.232 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.232 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:47.232 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:47.232 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:47.232 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd2 00:07:47.232 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.232 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.232 18:18:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:47.232 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.232 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.232 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.232 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd4 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.490 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:47.749 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:47.749 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:47.749 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:47.749 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:47.749 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:47.749 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:47.749 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:47.749 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:47.749 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:47.749 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 
00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.007 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:48.265 
18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.265 18:18:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:48.524 /dev/nbd0 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.524 1+0 records in 00:07:48.524 1+0 records out 00:07:48.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050758 s, 8.1 MB/s 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.524 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:48.524 /dev/nbd1 00:07:48.782 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:48.782 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:48.782 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:48.782 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.782 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.782 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.782 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:48.782 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.782 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.783 1+0 records in 
00:07:48.783 1+0 records out 00:07:48.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276667 s, 14.8 MB/s 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:48.783 /dev/nbd10 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:48.783 1+0 records in 00:07:48.783 1+0 records out 00:07:48.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485751 s, 8.4 MB/s 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:48.783 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:49.042 /dev/nbd11 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:49.042 
18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.042 1+0 records in 00:07:49.042 1+0 records out 00:07:49.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465411 s, 8.8 MB/s 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.042 18:18:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:49.301 /dev/nbd12 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 
00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.301 1+0 records in 00:07:49.301 1+0 records out 00:07:49.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536532 s, 7.6 MB/s 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.301 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:49.562 /dev/nbd13 
00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.562 1+0 records in 00:07:49.562 1+0 records out 00:07:49.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000708935 s, 5.8 MB/s 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.562 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:49.824 /dev/nbd14 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:49.824 1+0 records in 00:07:49.824 1+0 records out 00:07:49.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339786 s, 12.1 MB/s 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.824 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd0", 00:07:50.086 "bdev_name": "Nvme0n1" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd1", 00:07:50.086 "bdev_name": "Nvme1n1p1" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd10", 00:07:50.086 "bdev_name": "Nvme1n1p2" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd11", 00:07:50.086 "bdev_name": "Nvme2n1" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd12", 00:07:50.086 "bdev_name": "Nvme2n2" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd13", 00:07:50.086 "bdev_name": "Nvme2n3" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd14", 00:07:50.086 "bdev_name": "Nvme3n1" 00:07:50.086 } 00:07:50.086 ]' 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd0", 00:07:50.086 "bdev_name": "Nvme0n1" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd1", 00:07:50.086 "bdev_name": "Nvme1n1p1" 00:07:50.086 
}, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd10", 00:07:50.086 "bdev_name": "Nvme1n1p2" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd11", 00:07:50.086 "bdev_name": "Nvme2n1" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd12", 00:07:50.086 "bdev_name": "Nvme2n2" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd13", 00:07:50.086 "bdev_name": "Nvme2n3" 00:07:50.086 }, 00:07:50.086 { 00:07:50.086 "nbd_device": "/dev/nbd14", 00:07:50.086 "bdev_name": "Nvme3n1" 00:07:50.086 } 00:07:50.086 ]' 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:50.086 /dev/nbd1 00:07:50.086 /dev/nbd10 00:07:50.086 /dev/nbd11 00:07:50.086 /dev/nbd12 00:07:50.086 /dev/nbd13 00:07:50.086 /dev/nbd14' 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:50.086 /dev/nbd1 00:07:50.086 /dev/nbd10 00:07:50.086 /dev/nbd11 00:07:50.086 /dev/nbd12 00:07:50.086 /dev/nbd13 00:07:50.086 /dev/nbd14' 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:50.086 18:18:38 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:50.086 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:50.087 256+0 records in 00:07:50.087 256+0 records out 00:07:50.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505366 s, 207 MB/s 00:07:50.087 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.087 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:50.349 256+0 records in 00:07:50.349 256+0 records out 00:07:50.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173884 s, 6.0 MB/s 00:07:50.349 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.349 18:18:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:50.349 256+0 records in 00:07:50.349 256+0 records out 00:07:50.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130797 s, 8.0 MB/s 00:07:50.349 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.349 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:50.349 256+0 records in 00:07:50.349 256+0 records out 00:07:50.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0899132 s, 11.7 MB/s 
00:07:50.349 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.349 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:50.608 256+0 records in 00:07:50.608 256+0 records out 00:07:50.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163469 s, 6.4 MB/s 00:07:50.608 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.608 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:50.608 256+0 records in 00:07:50.608 256+0 records out 00:07:50.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0972051 s, 10.8 MB/s 00:07:50.871 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.871 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:50.871 256+0 records in 00:07:50.871 256+0 records out 00:07:50.871 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175194 s, 6.0 MB/s 00:07:50.871 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.871 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:51.130 256+0 records in 00:07:51.130 256+0 records out 00:07:51.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137072 s, 7.6 MB/s 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:51.130 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:51.131 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:51.131 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:51.131 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:51.131 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.131 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:51.131 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:51.131 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:51.131 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.131 18:18:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:51.389 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:51.389 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:51.389 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:51.389 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.389 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.389 
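The write-then-verify pattern traced above (fill a 1 MiB random pattern file, `dd` it onto every nbd device, then `cmp` each device back against the pattern) can be sketched as below. Plain files in a temp directory stand in for the `/dev/nbdX` devices, and `oflag=direct` is dropped accordingly; both substitutions are assumptions of this sketch, not part of the original test.

```shell
#!/usr/bin/env bash
# Stand-ins for /dev/nbdX: plain files in a temp dir (assumption of this sketch).
workdir=$(mktemp -d)
tmp_file="$workdir/nbdrandtest"
nbd_list=("$workdir/nbd0" "$workdir/nbd1")

# Write phase: build the random pattern, then copy it to every "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for i in "${nbd_list[@]}"; do
    # oflag=direct is omitted here because these are regular files.
    dd if="$tmp_file" of="$i" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1M of every "device" against the pattern.
for i in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$i"
done
rm "$tmp_file"
echo OK
```

Because every device was written from the same pattern file, any two of them must also compare equal to each other.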
18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:51.389 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.389 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.389 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.389 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.647 
18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.647 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:51.907 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:51.907 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:51.907 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:51.907 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.907 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.907 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:51.907 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:51.907 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.907 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:51.907 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:52.165 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:52.165 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:52.165 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 
00:07:52.165 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.165 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.165 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:52.165 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.165 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.165 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.165 18:18:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:52.423 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:52.423 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:52.423 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:52.423 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.423 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.423 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:52.423 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.423 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.423 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:52.423 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 
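Each `nbd_stop_disk` call above is followed by the same `waitfornbd_exit` shape: poll `grep -q -w <name> /proc/partitions` up to 20 times and `break` once the entry disappears. A runnable sketch of that bounded-retry loop, with a temp file standing in for `/proc/partitions` and a background edit simulating the kernel removing the device (both are assumptions of this sketch):

```shell
#!/usr/bin/env bash
# Stand-in file for /proc/partitions (assumption of this sketch).
partitions=$(mktemp)
printf 'nbd0\nnbd1\n' > "$partitions"
nbd_name=nbd0

# Simulate the device being torn down shortly after the stop request.
( sleep 0.05; printf 'nbd1\n' > "$partitions" ) &

i=1
while (( i <= 20 )); do
    if ! grep -q -w "$nbd_name" "$partitions"; then
        break            # entry gone: the device has exited
    fi
    sleep 0.1            # bounded poll interval, max ~2 s total
    (( i++ ))
done
wait
echo "polls=$i"
```

The `-w` flag matters here: it keeps `nbd1` from matching while waiting for `nbd0`, and `nbd1` from matching `nbd10`-style names.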
00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.683 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:52.944 
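The `nbd_get_count` trace above feeds the `nbd_get_disks` JSON (here `[]`) through a name extraction and a `grep -c /dev/nbd`; the lone `-- # true` line is the `|| true` that keeps the pipeline alive when `grep -c` prints `0` and exits nonzero. A minimal sketch of that counting idiom, using a `grep -o` pattern match in place of the `jq` extraction step (an assumption of this sketch, chosen so it runs without `jq`):

```shell
#!/usr/bin/env bash
# Hypothetical JSON payloads shaped like nbd_get_disks output.
empty_json='[]'
busy_json='[{"nbd_device":"/dev/nbd0"},{"nbd_device":"/dev/nbd1"}]'

count_nbd() {
    # grep -o pulls out each device path; grep -c counts them.
    # grep -c exits 1 when the count is 0, so '|| true' keeps the
    # function from failing on an empty list (the '# true' in the trace).
    echo "$1" | grep -o '/dev/nbd[0-9]*' | grep -c . || true
}

count_nbd "$empty_json"   # prints 0
count_nbd "$busy_json"    # prints 2
```

A count of 0 here is the success condition: after `nbd_stop_disks`, no `/dev/nbd*` entries should remain registered.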
18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:52.944 malloc_lvol_verify 00:07:52.944 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:53.203 a1393087-2694-4692-8c5e-4f1c77570e3a 00:07:53.203 18:18:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:53.461 135334a7-48ba-40c3-a3a6-367b02247104 00:07:53.461 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:53.720 /dev/nbd0 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:53.720 mke2fs 1.47.0 (5-Feb-2023) 00:07:53.720 Discarding device blocks: 0/4096 done 00:07:53.720 Creating filesystem with 4096 1k 
blocks and 1024 inodes 00:07:53.720 00:07:53.720 Allocating group tables: 0/1 done 00:07:53.720 Writing inode tables: 0/1 done 00:07:53.720 Creating journal (1024 blocks): done 00:07:53.720 Writing superblocks and filesystem accounting information: 0/1 done 00:07:53.720 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:53.720 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74727 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@950 -- # '[' -z 74727 ']' 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 74727 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74727 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.978 killing process with pid 74727 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74727' 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 74727 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 74727 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:53.978 00:07:53.978 real 0m10.027s 00:07:53.978 user 0m14.218s 00:07:53.978 sys 0m3.479s 00:07:53.978 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.978 ************************************ 00:07:53.979 END TEST bdev_nbd 00:07:53.979 ************************************ 00:07:53.979 18:18:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:53.979 18:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:53.979 18:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:53.979 18:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:53.979 skipping fio tests on NVMe due to multi-ns failures. 
00:07:53.979 18:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:53.979 18:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:53.979 18:18:42 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:53.979 18:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:53.979 18:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.979 18:18:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.979 ************************************ 00:07:53.979 START TEST bdev_verify 00:07:53.979 ************************************ 00:07:53.979 18:18:42 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:54.237 [2024-10-08 18:18:42.868988] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:07:54.237 [2024-10-08 18:18:42.869076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75126 ] 00:07:54.237 [2024-10-08 18:18:42.991466] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:54.237 [2024-10-08 18:18:43.008862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:54.237 [2024-10-08 18:18:43.039200] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.237 [2024-10-08 18:18:43.039278] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.802 Running I/O for 5 seconds... 00:07:57.111 25472.00 IOPS, 99.50 MiB/s [2024-10-08T18:18:46.902Z] 25088.00 IOPS, 98.00 MiB/s [2024-10-08T18:18:47.836Z] 24320.00 IOPS, 95.00 MiB/s [2024-10-08T18:18:48.769Z] 23824.00 IOPS, 93.06 MiB/s [2024-10-08T18:18:48.769Z] 23488.00 IOPS, 91.75 MiB/s 00:07:59.919 Latency(us) 00:07:59.919 [2024-10-08T18:18:48.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.919 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x0 length 0xbd0bd 00:07:59.919 Nvme0n1 : 5.06 1569.11 6.13 0.00 0.00 81210.77 17543.48 73803.62 00:07:59.919 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:59.919 Nvme0n1 : 5.06 1732.23 6.77 0.00 0.00 73526.51 13712.15 70577.23 00:07:59.919 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x0 length 0x4ff80 00:07:59.919 Nvme1n1p1 : 5.06 1568.59 6.13 0.00 0.00 81051.17 18350.08 70173.93 00:07:59.919 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:59.919 Nvme1n1p1 : 5.08 1738.00 6.79 0.00 0.00 73389.81 16938.54 69367.34 00:07:59.919 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x0 length 0x4ff7f 00:07:59.919 Nvme1n1p2 : 5.08 1574.16 6.15 0.00 0.00 80635.93 8318.03 69367.34 00:07:59.919 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:07:59.919 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:59.919 Nvme1n1p2 : 5.09 1736.18 6.78 0.00 0.00 73342.65 18753.38 67754.14 00:07:59.919 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x0 length 0x80000 00:07:59.919 Nvme2n1 : 5.09 1573.16 6.15 0.00 0.00 80493.33 9830.40 65737.65 00:07:59.919 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x80000 length 0x80000 00:07:59.919 Nvme2n1 : 5.09 1735.64 6.78 0.00 0.00 73283.19 17745.13 65334.35 00:07:59.919 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x0 length 0x80000 00:07:59.919 Nvme2n2 : 5.10 1582.59 6.18 0.00 0.00 80089.48 7612.26 66544.25 00:07:59.919 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x80000 length 0x80000 00:07:59.919 Nvme2n2 : 5.09 1735.16 6.78 0.00 0.00 73205.45 17442.66 66140.95 00:07:59.919 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x0 length 0x80000 00:07:59.919 Nvme2n3 : 5.10 1582.15 6.18 0.00 0.00 80023.95 7713.08 70173.93 00:07:59.919 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x80000 length 0x80000 00:07:59.919 Nvme2n3 : 5.09 1734.72 6.78 0.00 0.00 73145.82 16434.41 69367.34 00:07:59.919 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x0 length 0x20000 00:07:59.919 Nvme3n1 : 5.10 1581.72 6.18 0.00 0.00 79979.89 8015.56 73400.32 00:07:59.919 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:59.919 Verification LBA range: start 0x20000 length 0x20000 00:07:59.919 Nvme3n1 : 5.09 1734.26 6.77 0.00 0.00 73068.44 15426.17 70980.53 00:07:59.919 
[2024-10-08T18:18:48.769Z] =================================================================================================================== 00:07:59.919 [2024-10-08T18:18:48.769Z] Total : 23177.66 90.54 0.00 0.00 76712.98 7612.26 73803.62 00:08:00.484 00:08:00.484 real 0m6.368s 00:08:00.484 user 0m12.049s 00:08:00.484 sys 0m0.190s 00:08:00.484 18:18:49 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.484 ************************************ 00:08:00.484 18:18:49 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:00.484 END TEST bdev_verify 00:08:00.484 ************************************ 00:08:00.484 18:18:49 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:00.484 18:18:49 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:00.484 18:18:49 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.484 18:18:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.484 ************************************ 00:08:00.484 START TEST bdev_verify_big_io 00:08:00.484 ************************************ 00:08:00.484 18:18:49 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:00.485 [2024-10-08 18:18:49.294070] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
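The MiB/s column in the bdevperf samples above follows directly from IOPS and the 4 KiB IO size set by `-o 4096`; for example the final 5-second sample, 23488 IOPS, works out to 91.75 MiB/s as reported. A quick check of that arithmetic:

```shell
#!/usr/bin/env bash
# throughput = IOPS * io_size / 1 MiB
iops=23488
io_size=4096    # matches -o 4096 on the bdevperf command line
mibs=$(awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s", iops * sz / (1024 * 1024) }')
echo "$mibs"    # prints 91.75 MiB/s
```

The same conversion reproduces the other samples, e.g. 25472 IOPS × 4096 B = 99.50 MiB/s.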
00:08:00.485 [2024-10-08 18:18:49.294204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75213 ] 00:08:00.742 [2024-10-08 18:18:49.425254] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:00.742 [2024-10-08 18:18:49.444000] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:00.742 [2024-10-08 18:18:49.487458] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.742 [2024-10-08 18:18:49.487500] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.308 Running I/O for 5 seconds... 00:08:07.468 969.00 IOPS, 60.56 MiB/s [2024-10-08T18:18:56.885Z] 2904.00 IOPS, 181.50 MiB/s [2024-10-08T18:18:56.885Z] 3347.67 IOPS, 209.23 MiB/s 00:08:08.035 Latency(us) 00:08:08.035 [2024-10-08T18:18:56.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.035 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x0 length 0xbd0b 00:08:08.035 Nvme0n1 : 5.84 105.19 6.57 0.00 0.00 1159216.65 24702.03 1335724.50 00:08:08.035 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:08.035 Nvme0n1 : 6.13 67.66 4.23 0.00 0.00 1708345.56 22988.01 2064888.12 00:08:08.035 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x0 length 0x4ff8 00:08:08.035 Nvme1n1p1 : 5.84 104.30 6.52 0.00 0.00 1125046.75 97194.93 1135688.47 00:08:08.035 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x4ff8 length 0x4ff8 
00:08:08.035 Nvme1n1p1 : 6.19 77.55 4.85 0.00 0.00 1472456.97 53638.70 1651910.50 00:08:08.035 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x0 length 0x4ff7 00:08:08.035 Nvme1n1p2 : 5.84 109.57 6.85 0.00 0.00 1058122.83 98001.53 1135688.47 00:08:08.035 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:08.035 Nvme1n1p2 : 6.23 82.20 5.14 0.00 0.00 1302040.81 36901.81 1367988.38 00:08:08.035 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x0 length 0x8000 00:08:08.035 Nvme2n1 : 5.96 112.19 7.01 0.00 0.00 1000883.09 119376.34 1180857.90 00:08:08.035 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x8000 length 0x8000 00:08:08.035 Nvme2n1 : 6.28 91.75 5.73 0.00 0.00 1114109.55 22282.24 1406705.03 00:08:08.035 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x0 length 0x8000 00:08:08.035 Nvme2n2 : 6.09 121.66 7.60 0.00 0.00 896507.85 31658.93 1135688.47 00:08:08.035 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x8000 length 0x8000 00:08:08.035 Nvme2n2 : 6.41 116.39 7.27 0.00 0.00 861592.43 20669.05 1432516.14 00:08:08.035 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x0 length 0x8000 00:08:08.035 Nvme2n3 : 6.09 126.05 7.88 0.00 0.00 840825.30 42951.29 1161499.57 00:08:08.035 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x8000 length 0x8000 00:08:08.035 Nvme2n3 : 6.49 142.02 8.88 0.00 0.00 677511.89 6553.60 2219754.73 00:08:08.035 Job: Nvme3n1 (Core Mask 0x1, workload: verify, 
depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x0 length 0x2000 00:08:08.035 Nvme3n1 : 6.16 145.35 9.08 0.00 0.00 706565.66 724.68 1200216.22 00:08:08.035 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:08.035 Verification LBA range: start 0x2000 length 0x2000 00:08:08.035 Nvme3n1 : 6.77 281.45 17.59 0.00 0.00 326815.99 215.83 3329632.10 00:08:08.035 [2024-10-08T18:18:56.885Z] =================================================================================================================== 00:08:08.035 [2024-10-08T18:18:56.885Z] Total : 1683.32 105.21 0.00 0.00 886470.28 215.83 3329632.10 00:08:08.969 00:08:08.969 real 0m8.357s 00:08:08.969 user 0m15.903s 00:08:08.969 sys 0m0.251s 00:08:08.969 18:18:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:08.969 18:18:57 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:08.969 ************************************ 00:08:08.969 END TEST bdev_verify_big_io 00:08:08.969 ************************************ 00:08:08.969 18:18:57 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:08.969 18:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:08.969 18:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:08.969 18:18:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.969 ************************************ 00:08:08.969 START TEST bdev_write_zeroes 00:08:08.969 ************************************ 00:08:08.969 18:18:57 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w 
write_zeroes -t 1 '' 00:08:08.969 [2024-10-08 18:18:57.684722] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:08.969 [2024-10-08 18:18:57.684826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75326 ] 00:08:08.969 [2024-10-08 18:18:57.807503] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:09.227 [2024-10-08 18:18:57.828766] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.227 [2024-10-08 18:18:57.875128] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.484 Running I/O for 1 seconds... 00:08:10.853 62208.00 IOPS, 243.00 MiB/s 00:08:10.853 Latency(us) 00:08:10.853 [2024-10-08T18:18:59.703Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.853 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.853 Nvme0n1 : 1.03 8860.24 34.61 0.00 0.00 14414.18 9275.86 25004.50 00:08:10.853 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.853 Nvme1n1p1 : 1.03 8849.34 34.57 0.00 0.00 14407.94 9830.40 24702.03 00:08:10.853 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.853 Nvme1n1p2 : 1.03 8838.40 34.53 0.00 0.00 14388.89 9225.45 23794.61 00:08:10.853 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.853 Nvme2n1 : 1.03 8828.44 34.49 0.00 0.00 14382.15 8771.74 23088.84 00:08:10.853 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.853 Nvme2n2 : 1.03 8818.44 34.45 0.00 0.00 14363.72 7360.20 22584.71 00:08:10.853 Job: Nvme2n3 (Core Mask 0x1, workload: 
write_zeroes, depth: 128, IO size: 4096) 00:08:10.853 Nvme2n3 : 1.03 8808.52 34.41 0.00 0.00 14355.86 7360.20 23592.96 00:08:10.853 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:10.853 Nvme3n1 : 1.03 8736.63 34.13 0.00 0.00 14455.87 10284.11 25105.33 00:08:10.853 [2024-10-08T18:18:59.703Z] =================================================================================================================== 00:08:10.853 [2024-10-08T18:18:59.703Z] Total : 61740.02 241.17 0.00 0.00 14395.45 7360.20 25105.33 00:08:10.853 00:08:10.853 real 0m1.905s 00:08:10.853 user 0m1.612s 00:08:10.853 sys 0m0.182s 00:08:10.854 18:18:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.854 18:18:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:10.854 ************************************ 00:08:10.854 END TEST bdev_write_zeroes 00:08:10.854 ************************************ 00:08:10.854 18:18:59 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:10.854 18:18:59 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:10.854 18:18:59 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.854 18:18:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:10.854 ************************************ 00:08:10.854 START TEST bdev_json_nonenclosed 00:08:10.854 ************************************ 00:08:10.854 18:18:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:10.854 [2024-10-08 18:18:59.633215] Starting SPDK v25.01-pre git 
sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:10.854 [2024-10-08 18:18:59.633332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75364 ] 00:08:11.164 [2024-10-08 18:18:59.761415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:11.164 [2024-10-08 18:18:59.783297] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.164 [2024-10-08 18:18:59.826896] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.164 [2024-10-08 18:18:59.826990] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:11.164 [2024-10-08 18:18:59.827012] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:11.164 [2024-10-08 18:18:59.827022] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.164 00:08:11.164 real 0m0.351s 00:08:11.164 user 0m0.146s 00:08:11.164 sys 0m0.101s 00:08:11.164 18:18:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.164 18:18:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:11.164 ************************************ 00:08:11.164 END TEST bdev_json_nonenclosed 00:08:11.164 ************************************ 00:08:11.164 18:18:59 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.164 18:18:59 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:11.164 18:18:59 blockdev_nvme_gpt -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.164 18:18:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.164 ************************************ 00:08:11.164 START TEST bdev_json_nonarray 00:08:11.164 ************************************ 00:08:11.164 18:18:59 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:11.421 [2024-10-08 18:19:00.018585] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:11.421 [2024-10-08 18:19:00.018691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75389 ] 00:08:11.421 [2024-10-08 18:19:00.147317] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:11.421 [2024-10-08 18:19:00.167457] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.421 [2024-10-08 18:19:00.217150] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.421 [2024-10-08 18:19:00.217313] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:11.421 [2024-10-08 18:19:00.217348] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:11.421 [2024-10-08 18:19:00.217369] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:11.679 00:08:11.679 real 0m0.351s 00:08:11.679 user 0m0.141s 00:08:11.679 sys 0m0.107s 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.679 ************************************ 00:08:11.679 END TEST bdev_json_nonarray 00:08:11.679 ************************************ 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:11.679 18:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:11.679 18:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:11.679 18:19:00 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:11.679 18:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:11.679 18:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.679 18:19:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.679 ************************************ 00:08:11.679 START TEST bdev_gpt_uuid 00:08:11.679 ************************************ 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75409 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- 
# waitforlisten 75409 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 75409 ']' 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:11.679 18:19:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:11.679 [2024-10-08 18:19:00.435725] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:11.679 [2024-10-08 18:19:00.435869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75409 ] 00:08:11.938 [2024-10-08 18:19:00.566271] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:11.938 [2024-10-08 18:19:00.582191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.938 [2024-10-08 18:19:00.630116] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.504 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.504 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:12.504 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:12.504 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.504 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:12.764 Some configs were skipped because the RPC state that can call them passed over. 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:12.764 { 00:08:12.764 "name": "Nvme1n1p1", 
00:08:12.764 "aliases": [ 00:08:12.764 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:12.764 ], 00:08:12.764 "product_name": "GPT Disk", 00:08:12.764 "block_size": 4096, 00:08:12.764 "num_blocks": 655104, 00:08:12.764 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:12.764 "assigned_rate_limits": { 00:08:12.764 "rw_ios_per_sec": 0, 00:08:12.764 "rw_mbytes_per_sec": 0, 00:08:12.764 "r_mbytes_per_sec": 0, 00:08:12.764 "w_mbytes_per_sec": 0 00:08:12.764 }, 00:08:12.764 "claimed": false, 00:08:12.764 "zoned": false, 00:08:12.764 "supported_io_types": { 00:08:12.764 "read": true, 00:08:12.764 "write": true, 00:08:12.764 "unmap": true, 00:08:12.764 "flush": true, 00:08:12.764 "reset": true, 00:08:12.764 "nvme_admin": false, 00:08:12.764 "nvme_io": false, 00:08:12.764 "nvme_io_md": false, 00:08:12.764 "write_zeroes": true, 00:08:12.764 "zcopy": false, 00:08:12.764 "get_zone_info": false, 00:08:12.764 "zone_management": false, 00:08:12.764 "zone_append": false, 00:08:12.764 "compare": true, 00:08:12.764 "compare_and_write": false, 00:08:12.764 "abort": true, 00:08:12.764 "seek_hole": false, 00:08:12.764 "seek_data": false, 00:08:12.764 "copy": true, 00:08:12.764 "nvme_iov_md": false 00:08:12.764 }, 00:08:12.764 "driver_specific": { 00:08:12.764 "gpt": { 00:08:12.764 "base_bdev": "Nvme1n1", 00:08:12.764 "offset_blocks": 256, 00:08:12.764 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:12.764 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:12.764 "partition_name": "SPDK_TEST_first" 00:08:12.764 } 00:08:12.764 } 00:08:12.764 } 00:08:12.764 ]' 00:08:12.764 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 
6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:13.030 { 00:08:13.030 "name": "Nvme1n1p2", 00:08:13.030 "aliases": [ 00:08:13.030 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:13.030 ], 00:08:13.030 "product_name": "GPT Disk", 00:08:13.030 "block_size": 4096, 00:08:13.030 "num_blocks": 655103, 00:08:13.030 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:13.030 "assigned_rate_limits": { 00:08:13.030 "rw_ios_per_sec": 0, 00:08:13.030 "rw_mbytes_per_sec": 0, 00:08:13.030 "r_mbytes_per_sec": 0, 00:08:13.030 "w_mbytes_per_sec": 0 00:08:13.030 }, 00:08:13.030 "claimed": false, 00:08:13.030 "zoned": false, 00:08:13.030 "supported_io_types": { 00:08:13.030 "read": true, 00:08:13.030 "write": true, 00:08:13.030 "unmap": true, 00:08:13.030 "flush": true, 00:08:13.030 "reset": true, 00:08:13.030 "nvme_admin": false, 00:08:13.030 "nvme_io": false, 00:08:13.030 "nvme_io_md": false, 00:08:13.030 "write_zeroes": true, 00:08:13.030 "zcopy": false, 00:08:13.030 "get_zone_info": false, 00:08:13.030 "zone_management": false, 
00:08:13.030 "zone_append": false, 00:08:13.030 "compare": true, 00:08:13.030 "compare_and_write": false, 00:08:13.030 "abort": true, 00:08:13.030 "seek_hole": false, 00:08:13.030 "seek_data": false, 00:08:13.030 "copy": true, 00:08:13.030 "nvme_iov_md": false 00:08:13.030 }, 00:08:13.030 "driver_specific": { 00:08:13.030 "gpt": { 00:08:13.030 "base_bdev": "Nvme1n1", 00:08:13.030 "offset_blocks": 655360, 00:08:13.030 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:13.030 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:13.030 "partition_name": "SPDK_TEST_second" 00:08:13.030 } 00:08:13.030 } 00:08:13.030 } 00:08:13.030 ]' 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 75409 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 75409 ']' 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 75409 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75409 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75409' 00:08:13.030 killing process with pid 75409 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 75409 00:08:13.030 18:19:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 75409 00:08:13.596 00:08:13.596 real 0m1.807s 00:08:13.597 user 0m1.916s 00:08:13.597 sys 0m0.384s 00:08:13.597 18:19:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.597 18:19:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:13.597 ************************************ 00:08:13.597 END TEST bdev_gpt_uuid 00:08:13.597 ************************************ 00:08:13.597 18:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:13.597 18:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:13.597 18:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:13.597 18:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:13.597 18:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:13.597 18:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:13.597 18:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:13.597 18:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 
00:08:13.597 18:19:02 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:13.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:13.855 Waiting for block devices as requested 00:08:13.855 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:13.855 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.114 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:14.114 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:19.384 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:19.384 18:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:19.384 18:19:07 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:19.384 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:19.384 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:19.384 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:19.384 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:19.384 18:19:08 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:19.384 ************************************ 00:08:19.384 END TEST blockdev_nvme_gpt 00:08:19.384 ************************************ 00:08:19.384 00:08:19.384 real 0m49.293s 00:08:19.384 user 1m4.244s 00:08:19.384 sys 0m7.673s 00:08:19.384 18:19:08 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.384 18:19:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:19.384 18:19:08 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:19.384 18:19:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.384 18:19:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.384 18:19:08 -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.384 ************************************ 00:08:19.384 START TEST nvme 00:08:19.384 ************************************ 00:08:19.384 18:19:08 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:19.643 * Looking for test storage... 00:08:19.643 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:19.644 18:19:08 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:19.644 18:19:08 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:19.644 18:19:08 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:19.644 18:19:08 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:19.644 18:19:08 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.644 18:19:08 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.644 18:19:08 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.644 18:19:08 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.644 18:19:08 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.644 18:19:08 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.644 18:19:08 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.644 18:19:08 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.644 18:19:08 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.644 18:19:08 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.644 18:19:08 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.644 18:19:08 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:19.644 18:19:08 nvme -- scripts/common.sh@345 -- # : 1 00:08:19.644 18:19:08 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.644 18:19:08 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.644 18:19:08 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:19.644 18:19:08 nvme -- scripts/common.sh@353 -- # local d=1 00:08:19.644 18:19:08 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.644 18:19:08 nvme -- scripts/common.sh@355 -- # echo 1 00:08:19.644 18:19:08 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.644 18:19:08 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:19.644 18:19:08 nvme -- scripts/common.sh@353 -- # local d=2 00:08:19.644 18:19:08 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.644 18:19:08 nvme -- scripts/common.sh@355 -- # echo 2 00:08:19.644 18:19:08 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.644 18:19:08 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.644 18:19:08 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.644 18:19:08 nvme -- scripts/common.sh@368 -- # return 0 00:08:19.644 18:19:08 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.644 18:19:08 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:19.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.644 --rc genhtml_branch_coverage=1 00:08:19.644 --rc genhtml_function_coverage=1 00:08:19.644 --rc genhtml_legend=1 00:08:19.644 --rc geninfo_all_blocks=1 00:08:19.644 --rc geninfo_unexecuted_blocks=1 00:08:19.644 00:08:19.644 ' 00:08:19.644 18:19:08 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:19.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.644 --rc genhtml_branch_coverage=1 00:08:19.644 --rc genhtml_function_coverage=1 00:08:19.644 --rc genhtml_legend=1 00:08:19.644 --rc geninfo_all_blocks=1 00:08:19.644 --rc geninfo_unexecuted_blocks=1 00:08:19.644 00:08:19.644 ' 00:08:19.644 18:19:08 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:19.644 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:19.644 --rc genhtml_branch_coverage=1 00:08:19.644 --rc genhtml_function_coverage=1 00:08:19.644 --rc genhtml_legend=1 00:08:19.644 --rc geninfo_all_blocks=1 00:08:19.644 --rc geninfo_unexecuted_blocks=1 00:08:19.644 00:08:19.644 ' 00:08:19.644 18:19:08 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:19.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.644 --rc genhtml_branch_coverage=1 00:08:19.644 --rc genhtml_function_coverage=1 00:08:19.644 --rc genhtml_legend=1 00:08:19.644 --rc geninfo_all_blocks=1 00:08:19.644 --rc geninfo_unexecuted_blocks=1 00:08:19.644 00:08:19.644 ' 00:08:19.644 18:19:08 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:19.902 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:20.469 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.469 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.469 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.469 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:20.469 18:19:09 nvme -- nvme/nvme.sh@79 -- # uname 00:08:20.469 18:19:09 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:20.469 18:19:09 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:20.469 18:19:09 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:20.469 18:19:09 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:20.469 18:19:09 nvme -- common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:20.469 18:19:09 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:20.469 18:19:09 nvme -- common/autotest_common.sh@1071 -- # stubpid=76035 00:08:20.729 Waiting for stub to ready for secondary processes... 00:08:20.729 18:19:09 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 
00:08:20.729 18:19:09 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:20.729 18:19:09 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/76035 ]] 00:08:20.729 18:19:09 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:20.729 18:19:09 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:20.729 [2024-10-08 18:19:09.349211] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:08:20.729 [2024-10-08 18:19:09.349326] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:21.301 [2024-10-08 18:19:10.081350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:21.301 [2024-10-08 18:19:10.100908] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.301 [2024-10-08 18:19:10.132209] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.301 [2024-10-08 18:19:10.132494] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.301 [2024-10-08 18:19:10.132552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.301 [2024-10-08 18:19:10.146510] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:21.301 [2024-10-08 18:19:10.146577] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:21.564 [2024-10-08 18:19:10.161931] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:21.564 [2024-10-08 18:19:10.162355] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:21.564 [2024-10-08 18:19:10.164509] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:21.564 [2024-10-08 18:19:10.164962] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:21.564 [2024-10-08 18:19:10.165090] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:21.564 [2024-10-08 18:19:10.167686] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:21.564 [2024-10-08 18:19:10.168103] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:21.564 [2024-10-08 18:19:10.168280] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:21.564 [2024-10-08 18:19:10.170843] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:21.564 [2024-10-08 18:19:10.171141] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 
00:08:21.564 [2024-10-08 18:19:10.171194] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:21.564 [2024-10-08 18:19:10.171245] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:21.564 [2024-10-08 18:19:10.171293] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:21.564 done. 00:08:21.564 18:19:10 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:21.564 18:19:10 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:21.564 18:19:10 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:21.564 18:19:10 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:21.564 18:19:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.564 18:19:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.564 ************************************ 00:08:21.564 START TEST nvme_reset 00:08:21.564 ************************************ 00:08:21.564 18:19:10 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:21.827 Initializing NVMe Controllers 00:08:21.827 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:21.827 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:21.827 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:21.827 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:21.827 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:21.827 00:08:21.827 real 0m0.206s 00:08:21.827 user 0m0.051s 00:08:21.827 sys 0m0.113s 00:08:21.827 18:19:10 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.827 ************************************ 00:08:21.827 END TEST nvme_reset 00:08:21.827 ************************************ 00:08:21.827 18:19:10 nvme.nvme_reset -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.827 18:19:10 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:21.827 18:19:10 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.827 18:19:10 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.827 18:19:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:21.827 ************************************ 00:08:21.827 START TEST nvme_identify 00:08:21.827 ************************************ 00:08:21.827 18:19:10 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:21.827 18:19:10 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:21.827 18:19:10 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:21.827 18:19:10 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:21.827 18:19:10 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:21.827 18:19:10 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:21.827 18:19:10 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:21.827 18:19:10 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:21.827 18:19:10 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:21.827 18:19:10 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:22.092 18:19:10 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:22.092 18:19:10 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:22.092 18:19:10 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:22.092 [2024-10-08 18:19:10.855801] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 76057 terminated 
unexpected 00:08:22.092 ===================================================== 00:08:22.092 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:22.092 ===================================================== 00:08:22.092 Controller Capabilities/Features 00:08:22.092 ================================ 00:08:22.092 Vendor ID: 1b36 00:08:22.092 Subsystem Vendor ID: 1af4 00:08:22.092 Serial Number: 12340 00:08:22.092 Model Number: QEMU NVMe Ctrl 00:08:22.092 Firmware Version: 8.0.0 00:08:22.092 Recommended Arb Burst: 6 00:08:22.092 IEEE OUI Identifier: 00 54 52 00:08:22.092 Multi-path I/O 00:08:22.092 May have multiple subsystem ports: No 00:08:22.092 May have multiple controllers: No 00:08:22.092 Associated with SR-IOV VF: No 00:08:22.092 Max Data Transfer Size: 524288 00:08:22.092 Max Number of Namespaces: 256 00:08:22.092 Max Number of I/O Queues: 64 00:08:22.092 NVMe Specification Version (VS): 1.4 00:08:22.092 NVMe Specification Version (Identify): 1.4 00:08:22.092 Maximum Queue Entries: 2048 00:08:22.092 Contiguous Queues Required: Yes 00:08:22.092 Arbitration Mechanisms Supported 00:08:22.092 Weighted Round Robin: Not Supported 00:08:22.092 Vendor Specific: Not Supported 00:08:22.092 Reset Timeout: 7500 ms 00:08:22.092 Doorbell Stride: 4 bytes 00:08:22.092 NVM Subsystem Reset: Not Supported 00:08:22.092 Command Sets Supported 00:08:22.092 NVM Command Set: Supported 00:08:22.092 Boot Partition: Not Supported 00:08:22.092 Memory Page Size Minimum: 4096 bytes 00:08:22.092 Memory Page Size Maximum: 65536 bytes 00:08:22.092 Persistent Memory Region: Not Supported 00:08:22.092 Optional Asynchronous Events Supported 00:08:22.092 Namespace Attribute Notices: Supported 00:08:22.092 Firmware Activation Notices: Not Supported 00:08:22.092 ANA Change Notices: Not Supported 00:08:22.092 PLE Aggregate Log Change Notices: Not Supported 00:08:22.092 LBA Status Info Alert Notices: Not Supported 00:08:22.092 EGE Aggregate Log Change Notices: Not Supported 00:08:22.092 Normal NVM 
Subsystem Shutdown event: Not Supported 00:08:22.092 Zone Descriptor Change Notices: Not Supported 00:08:22.092 Discovery Log Change Notices: Not Supported 00:08:22.092 Controller Attributes 00:08:22.092 128-bit Host Identifier: Not Supported 00:08:22.092 Non-Operational Permissive Mode: Not Supported 00:08:22.092 NVM Sets: Not Supported 00:08:22.092 Read Recovery Levels: Not Supported 00:08:22.092 Endurance Groups: Not Supported 00:08:22.092 Predictable Latency Mode: Not Supported 00:08:22.092 Traffic Based Keep ALive: Not Supported 00:08:22.092 Namespace Granularity: Not Supported 00:08:22.092 SQ Associations: Not Supported 00:08:22.092 UUID List: Not Supported 00:08:22.092 Multi-Domain Subsystem: Not Supported 00:08:22.092 Fixed Capacity Management: Not Supported 00:08:22.092 Variable Capacity Management: Not Supported 00:08:22.092 Delete Endurance Group: Not Supported 00:08:22.092 Delete NVM Set: Not Supported 00:08:22.092 Extended LBA Formats Supported: Supported 00:08:22.092 Flexible Data Placement Supported: Not Supported 00:08:22.092 00:08:22.092 Controller Memory Buffer Support 00:08:22.092 ================================ 00:08:22.092 Supported: No 00:08:22.092 00:08:22.092 Persistent Memory Region Support 00:08:22.092 ================================ 00:08:22.092 Supported: No 00:08:22.092 00:08:22.092 Admin Command Set Attributes 00:08:22.092 ============================ 00:08:22.092 Security Send/Receive: Not Supported 00:08:22.092 Format NVM: Supported 00:08:22.092 Firmware Activate/Download: Not Supported 00:08:22.092 Namespace Management: Supported 00:08:22.092 Device Self-Test: Not Supported 00:08:22.093 Directives: Supported 00:08:22.093 NVMe-MI: Not Supported 00:08:22.093 Virtualization Management: Not Supported 00:08:22.093 Doorbell Buffer Config: Supported 00:08:22.093 Get LBA Status Capability: Not Supported 00:08:22.093 Command & Feature Lockdown Capability: Not Supported 00:08:22.093 Abort Command Limit: 4 00:08:22.093 Async Event Request 
Limit: 4 00:08:22.093 Number of Firmware Slots: N/A 00:08:22.093 Firmware Slot 1 Read-Only: N/A 00:08:22.093 Firmware Activation Without Reset: N/A 00:08:22.093 Multiple Update Detection Support: N/A 00:08:22.093 Firmware Update Granularity: No Information Provided 00:08:22.093 Per-Namespace SMART Log: Yes 00:08:22.093 Asymmetric Namespace Access Log Page: Not Supported 00:08:22.093 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:22.093 Command Effects Log Page: Supported 00:08:22.093 Get Log Page Extended Data: Supported 00:08:22.093 Telemetry Log Pages: Not Supported 00:08:22.093 Persistent Event Log Pages: Not Supported 00:08:22.093 Supported Log Pages Log Page: May Support 00:08:22.093 Commands Supported & Effects Log Page: Not Supported 00:08:22.093 Feature Identifiers & Effects Log Page:May Support 00:08:22.093 NVMe-MI Commands & Effects Log Page: May Support 00:08:22.093 Data Area 4 for Telemetry Log: Not Supported 00:08:22.093 Error Log Page Entries Supported: 1 00:08:22.093 Keep Alive: Not Supported 00:08:22.093 00:08:22.093 NVM Command Set Attributes 00:08:22.093 ========================== 00:08:22.093 Submission Queue Entry Size 00:08:22.093 Max: 64 00:08:22.093 Min: 64 00:08:22.093 Completion Queue Entry Size 00:08:22.093 Max: 16 00:08:22.093 Min: 16 00:08:22.093 Number of Namespaces: 256 00:08:22.093 Compare Command: Supported 00:08:22.093 Write Uncorrectable Command: Not Supported 00:08:22.093 Dataset Management Command: Supported 00:08:22.093 Write Zeroes Command: Supported 00:08:22.093 Set Features Save Field: Supported 00:08:22.093 Reservations: Not Supported 00:08:22.093 Timestamp: Supported 00:08:22.093 Copy: Supported 00:08:22.093 Volatile Write Cache: Present 00:08:22.093 Atomic Write Unit (Normal): 1 00:08:22.093 Atomic Write Unit (PFail): 1 00:08:22.093 Atomic Compare & Write Unit: 1 00:08:22.093 Fused Compare & Write: Not Supported 00:08:22.093 Scatter-Gather List 00:08:22.093 SGL Command Set: Supported 00:08:22.093 SGL Keyed: Not Supported 
00:08:22.093 SGL Bit Bucket Descriptor: Not Supported 00:08:22.093 SGL Metadata Pointer: Not Supported 00:08:22.093 Oversized SGL: Not Supported 00:08:22.093 SGL Metadata Address: Not Supported 00:08:22.093 SGL Offset: Not Supported 00:08:22.093 Transport SGL Data Block: Not Supported 00:08:22.093 Replay Protected Memory Block: Not Supported 00:08:22.093 00:08:22.093 Firmware Slot Information 00:08:22.093 ========================= 00:08:22.093 Active slot: 1 00:08:22.093 Slot 1 Firmware Revision: 1.0 00:08:22.093 00:08:22.093 00:08:22.093 Commands Supported and Effects 00:08:22.093 ============================== 00:08:22.093 Admin Commands 00:08:22.093 -------------- 00:08:22.093 Delete I/O Submission Queue (00h): Supported 00:08:22.093 Create I/O Submission Queue (01h): Supported 00:08:22.093 Get Log Page (02h): Supported 00:08:22.093 Delete I/O Completion Queue (04h): Supported 00:08:22.093 Create I/O Completion Queue (05h): Supported 00:08:22.093 Identify (06h): Supported 00:08:22.093 Abort (08h): Supported 00:08:22.093 Set Features (09h): Supported 00:08:22.093 Get Features (0Ah): Supported 00:08:22.093 Asynchronous Event Request (0Ch): Supported 00:08:22.093 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:22.093 Directive Send (19h): Supported 00:08:22.093 Directive Receive (1Ah): Supported 00:08:22.093 Virtualization Management (1Ch): Supported 00:08:22.093 Doorbell Buffer Config (7Ch): Supported 00:08:22.093 Format NVM (80h): Supported LBA-Change 00:08:22.093 I/O Commands 00:08:22.093 ------------ 00:08:22.093 Flush (00h): Supported LBA-Change 00:08:22.093 Write (01h): Supported LBA-Change 00:08:22.093 Read (02h): Supported 00:08:22.093 Compare (05h): Supported 00:08:22.093 Write Zeroes (08h): Supported LBA-Change 00:08:22.093 Dataset Management (09h): Supported LBA-Change 00:08:22.093 Unknown (0Ch): Supported 00:08:22.093 Unknown (12h): Supported 00:08:22.093 Copy (19h): Supported LBA-Change 00:08:22.093 Unknown (1Dh): Supported LBA-Change 
00:08:22.093 00:08:22.093 Error Log 00:08:22.093 ========= 00:08:22.093 00:08:22.093 Arbitration 00:08:22.093 =========== 00:08:22.093 Arbitration Burst: no limit 00:08:22.093 00:08:22.093 Power Management 00:08:22.093 ================ 00:08:22.093 Number of Power States: 1 00:08:22.093 Current Power State: Power State #0 00:08:22.093 Power State #0: 00:08:22.093 Max Power: 25.00 W 00:08:22.093 Non-Operational State: Operational 00:08:22.093 Entry Latency: 16 microseconds 00:08:22.093 Exit Latency: 4 microseconds 00:08:22.093 Relative Read Throughput: 0 00:08:22.093 Relative Read Latency: 0 00:08:22.093 Relative Write Throughput: 0 00:08:22.093 Relative Write Latency: 0 00:08:22.093 Idle Power[2024-10-08 18:19:10.857988] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 76057 terminated unexpected 00:08:22.093 : Not Reported 00:08:22.093 Active Power: Not Reported 00:08:22.093 Non-Operational Permissive Mode: Not Supported 00:08:22.093 00:08:22.093 Health Information 00:08:22.093 ================== 00:08:22.093 Critical Warnings: 00:08:22.093 Available Spare Space: OK 00:08:22.093 Temperature: OK 00:08:22.093 Device Reliability: OK 00:08:22.093 Read Only: No 00:08:22.093 Volatile Memory Backup: OK 00:08:22.093 Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.093 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:22.093 Available Spare: 0% 00:08:22.093 Available Spare Threshold: 0% 00:08:22.093 Life Percentage Used: 0% 00:08:22.093 Data Units Read: 691 00:08:22.093 Data Units Written: 619 00:08:22.093 Host Read Commands: 38476 00:08:22.093 Host Write Commands: 38262 00:08:22.093 Controller Busy Time: 0 minutes 00:08:22.093 Power Cycles: 0 00:08:22.093 Power On Hours: 0 hours 00:08:22.093 Unsafe Shutdowns: 0 00:08:22.093 Unrecoverable Media Errors: 0 00:08:22.093 Lifetime Error Log Entries: 0 00:08:22.093 Warning Temperature Time: 0 minutes 00:08:22.093 Critical Temperature Time: 0 minutes 00:08:22.093 00:08:22.093 Number of 
Queues 00:08:22.093 ================ 00:08:22.093 Number of I/O Submission Queues: 64 00:08:22.093 Number of I/O Completion Queues: 64 00:08:22.093 00:08:22.093 ZNS Specific Controller Data 00:08:22.093 ============================ 00:08:22.093 Zone Append Size Limit: 0 00:08:22.093 00:08:22.093 00:08:22.093 Active Namespaces 00:08:22.093 ================= 00:08:22.093 Namespace ID:1 00:08:22.093 Error Recovery Timeout: Unlimited 00:08:22.093 Command Set Identifier: NVM (00h) 00:08:22.093 Deallocate: Supported 00:08:22.093 Deallocated/Unwritten Error: Supported 00:08:22.093 Deallocated Read Value: All 0x00 00:08:22.093 Deallocate in Write Zeroes: Not Supported 00:08:22.093 Deallocated Guard Field: 0xFFFF 00:08:22.093 Flush: Supported 00:08:22.093 Reservation: Not Supported 00:08:22.093 Metadata Transferred as: Separate Metadata Buffer 00:08:22.093 Namespace Sharing Capabilities: Private 00:08:22.094 Size (in LBAs): 1548666 (5GiB) 00:08:22.094 Capacity (in LBAs): 1548666 (5GiB) 00:08:22.094 Utilization (in LBAs): 1548666 (5GiB) 00:08:22.094 Thin Provisioning: Not Supported 00:08:22.094 Per-NS Atomic Units: No 00:08:22.094 Maximum Single Source Range Length: 128 00:08:22.094 Maximum Copy Length: 128 00:08:22.094 Maximum Source Range Count: 128 00:08:22.094 NGUID/EUI64 Never Reused: No 00:08:22.094 Namespace Write Protected: No 00:08:22.094 Number of LBA Formats: 8 00:08:22.094 Current LBA Format: LBA Format #07 00:08:22.094 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.094 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.094 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.094 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.094 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.094 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.094 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.094 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.094 00:08:22.094 NVM Specific Namespace Data 00:08:22.094 
=========================== 00:08:22.094 Logical Block Storage Tag Mask: 0 00:08:22.094 Protection Information Capabilities: 00:08:22.094 16b Guard Protection Information Storage Tag Support: No 00:08:22.094 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.094 Storage Tag Check Read Support: No 00:08:22.094 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.094 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.094 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.094 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.094 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.094 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.094 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.094 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.094 ===================================================== 00:08:22.094 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:22.094 ===================================================== 00:08:22.094 Controller Capabilities/Features 00:08:22.094 ================================ 00:08:22.094 Vendor ID: 1b36 00:08:22.094 Subsystem Vendor ID: 1af4 00:08:22.094 Serial Number: 12341 00:08:22.094 Model Number: QEMU NVMe Ctrl 00:08:22.094 Firmware Version: 8.0.0 00:08:22.094 Recommended Arb Burst: 6 00:08:22.094 IEEE OUI Identifier: 00 54 52 00:08:22.094 Multi-path I/O 00:08:22.094 May have multiple subsystem ports: No 00:08:22.094 May have multiple controllers: No 00:08:22.094 Associated with SR-IOV VF: No 00:08:22.094 Max Data Transfer Size: 524288 00:08:22.094 Max Number of Namespaces: 256 00:08:22.094 Max Number of I/O 
Queues: 64 00:08:22.094 NVMe Specification Version (VS): 1.4 00:08:22.094 NVMe Specification Version (Identify): 1.4 00:08:22.094 Maximum Queue Entries: 2048 00:08:22.094 Contiguous Queues Required: Yes 00:08:22.094 Arbitration Mechanisms Supported 00:08:22.094 Weighted Round Robin: Not Supported 00:08:22.094 Vendor Specific: Not Supported 00:08:22.094 Reset Timeout: 7500 ms 00:08:22.094 Doorbell Stride: 4 bytes 00:08:22.094 NVM Subsystem Reset: Not Supported 00:08:22.094 Command Sets Supported 00:08:22.094 NVM Command Set: Supported 00:08:22.094 Boot Partition: Not Supported 00:08:22.094 Memory Page Size Minimum: 4096 bytes 00:08:22.094 Memory Page Size Maximum: 65536 bytes 00:08:22.094 Persistent Memory Region: Not Supported 00:08:22.094 Optional Asynchronous Events Supported 00:08:22.094 Namespace Attribute Notices: Supported 00:08:22.094 Firmware Activation Notices: Not Supported 00:08:22.094 ANA Change Notices: Not Supported 00:08:22.094 PLE Aggregate Log Change Notices: Not Supported 00:08:22.094 LBA Status Info Alert Notices: Not Supported 00:08:22.094 EGE Aggregate Log Change Notices: Not Supported 00:08:22.094 Normal NVM Subsystem Shutdown event: Not Supported 00:08:22.094 Zone Descriptor Change Notices: Not Supported 00:08:22.094 Discovery Log Change Notices: Not Supported 00:08:22.094 Controller Attributes 00:08:22.094 128-bit Host Identifier: Not Supported 00:08:22.094 Non-Operational Permissive Mode: Not Supported 00:08:22.094 NVM Sets: Not Supported 00:08:22.094 Read Recovery Levels: Not Supported 00:08:22.094 Endurance Groups: Not Supported 00:08:22.094 Predictable Latency Mode: Not Supported 00:08:22.094 Traffic Based Keep ALive: Not Supported 00:08:22.094 Namespace Granularity: Not Supported 00:08:22.094 SQ Associations: Not Supported 00:08:22.094 UUID List: Not Supported 00:08:22.094 Multi-Domain Subsystem: Not Supported 00:08:22.094 Fixed Capacity Management: Not Supported 00:08:22.094 Variable Capacity Management: Not Supported 00:08:22.094 
Delete Endurance Group: Not Supported 00:08:22.094 Delete NVM Set: Not Supported 00:08:22.094 Extended LBA Formats Supported: Supported 00:08:22.094 Flexible Data Placement Supported: Not Supported 00:08:22.094 00:08:22.094 Controller Memory Buffer Support 00:08:22.094 ================================ 00:08:22.094 Supported: No 00:08:22.094 00:08:22.094 Persistent Memory Region Support 00:08:22.094 ================================ 00:08:22.094 Supported: No 00:08:22.094 00:08:22.094 Admin Command Set Attributes 00:08:22.094 ============================ 00:08:22.094 Security Send/Receive: Not Supported 00:08:22.094 Format NVM: Supported 00:08:22.094 Firmware Activate/Download: Not Supported 00:08:22.094 Namespace Management: Supported 00:08:22.094 Device Self-Test: Not Supported 00:08:22.094 Directives: Supported 00:08:22.094 NVMe-MI: Not Supported 00:08:22.094 Virtualization Management: Not Supported 00:08:22.094 Doorbell Buffer Config: Supported 00:08:22.094 Get LBA Status Capability: Not Supported 00:08:22.094 Command & Feature Lockdown Capability: Not Supported 00:08:22.094 Abort Command Limit: 4 00:08:22.094 Async Event Request Limit: 4 00:08:22.094 Number of Firmware Slots: N/A 00:08:22.094 Firmware Slot 1 Read-Only: N/A 00:08:22.094 Firmware Activation Without Reset: N/A 00:08:22.094 Multiple Update Detection Support: N/A 00:08:22.094 Firmware Update Granularity: No Information Provided 00:08:22.094 Per-Namespace SMART Log: Yes 00:08:22.094 Asymmetric Namespace Access Log Page: Not Supported 00:08:22.094 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:22.094 Command Effects Log Page: Supported 00:08:22.094 Get Log Page Extended Data: Supported 00:08:22.094 Telemetry Log Pages: Not Supported 00:08:22.094 Persistent Event Log Pages: Not Supported 00:08:22.094 Supported Log Pages Log Page: May Support 00:08:22.094 Commands Supported & Effects Log Page: Not Supported 00:08:22.094 Feature Identifiers & Effects Log Page:May Support 00:08:22.094 NVMe-MI Commands & 
Effects Log Page: May Support 00:08:22.094 Data Area 4 for Telemetry Log: Not Supported 00:08:22.094 Error Log Page Entries Supported: 1 00:08:22.094 Keep Alive: Not Supported 00:08:22.094 00:08:22.094 NVM Command Set Attributes 00:08:22.094 ========================== 00:08:22.094 Submission Queue Entry Size 00:08:22.094 Max: 64 00:08:22.094 Min: 64 00:08:22.094 Completion Queue Entry Size 00:08:22.094 Max: 16 00:08:22.094 Min: 16 00:08:22.094 Number of Namespaces: 256 00:08:22.094 Compare Command: Supported 00:08:22.094 Write Uncorrectable Command: Not Supported 00:08:22.094 Dataset Management Command: Supported 00:08:22.094 Write Zeroes Command: Supported 00:08:22.094 Set Features Save Field: Supported 00:08:22.094 Reservations: Not Supported 00:08:22.094 Timestamp: Supported 00:08:22.094 Copy: Supported 00:08:22.094 Volatile Write Cache: Present 00:08:22.094 Atomic Write Unit (Normal): 1 00:08:22.094 Atomic Write Unit (PFail): 1 00:08:22.094 Atomic Compare & Write Unit: 1 00:08:22.094 Fused Compare & Write: Not Supported 00:08:22.094 Scatter-Gather List 00:08:22.094 SGL Command Set: Supported 00:08:22.095 SGL Keyed: Not Supported 00:08:22.095 SGL Bit Bucket Descriptor: Not Supported 00:08:22.095 SGL Metadata Pointer: Not Supported 00:08:22.095 Oversized SGL: Not Supported 00:08:22.095 SGL Metadata Address: Not Supported 00:08:22.095 SGL Offset: Not Supported 00:08:22.095 Transport SGL Data Block: Not Supported 00:08:22.095 Replay Protected Memory Block: Not Supported 00:08:22.095 00:08:22.095 Firmware Slot Information 00:08:22.095 ========================= 00:08:22.095 Active slot: 1 00:08:22.095 Slot 1 Firmware Revision: 1.0 00:08:22.095 00:08:22.095 00:08:22.095 Commands Supported and Effects 00:08:22.095 ============================== 00:08:22.095 Admin Commands 00:08:22.095 -------------- 00:08:22.095 Delete I/O Submission Queue (00h): Supported 00:08:22.095 Create I/O Submission Queue (01h): Supported 00:08:22.095 Get Log Page (02h): Supported 00:08:22.095 
Delete I/O Completion Queue (04h): Supported 00:08:22.095 Create I/O Completion Queue (05h): Supported 00:08:22.095 Identify (06h): Supported 00:08:22.095 Abort (08h): Supported 00:08:22.095 Set Features (09h): Supported 00:08:22.095 Get Features (0Ah): Supported 00:08:22.095 Asynchronous Event Request (0Ch): Supported 00:08:22.095 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:22.095 Directive Send (19h): Supported 00:08:22.095 Directive Receive (1Ah): Supported 00:08:22.095 Virtualization Management (1Ch): Supported 00:08:22.095 Doorbell Buffer Config (7Ch): Supported 00:08:22.095 Format NVM (80h): Supported LBA-Change 00:08:22.095 I/O Commands 00:08:22.095 ------------ 00:08:22.095 Flush (00h): Supported LBA-Change 00:08:22.095 Write (01h): Supported LBA-Change 00:08:22.095 Read (02h): Supported 00:08:22.095 Compare (05h): Supported 00:08:22.095 Write Zeroes (08h): Supported LBA-Change 00:08:22.095 Dataset Management (09h): Supported LBA-Change 00:08:22.095 Unknown (0Ch): Supported 00:08:22.095 Unknown (12h): Supported 00:08:22.095 Copy (19h): Supported LBA-Change 00:08:22.095 Unknown (1Dh): Supported LBA-Change 00:08:22.095 00:08:22.095 Error Log 00:08:22.095 ========= 00:08:22.095 00:08:22.095 Arbitration 00:08:22.095 =========== 00:08:22.095 Arbitration Burst: no limit 00:08:22.095 00:08:22.095 Power Management 00:08:22.095 ================ 00:08:22.095 Number of Power States: 1 00:08:22.095 Current Power State: Power State #0 00:08:22.095 Power State #0: 00:08:22.095 Max Power: 25.00 W 00:08:22.095 Non-Operational State: Operational 00:08:22.095 Entry Latency: 16 microseconds 00:08:22.095 Exit Latency: 4 microseconds 00:08:22.095 Relative Read Throughput: 0 00:08:22.095 Relative Read Latency: 0 00:08:22.095 Relative Write Throughput: 0 00:08:22.095 Relative Write Latency: 0 00:08:22.095 Idle Power: Not Reported 00:08:22.095 Active Power: Not Reported 00:08:22.095 Non-Operational Permissive Mode: Not Supported 00:08:22.095 00:08:22.095 Health 
Information 00:08:22.095 ================== 00:08:22.095 Critical Warnings: 00:08:22.095 Available Spare Space: OK 00:08:22.095 Temperature: [2024-10-08 18:19:10.860365] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 76057 terminated unexpected 00:08:22.095 OK 00:08:22.095 Device Reliability: OK 00:08:22.095 Read Only: No 00:08:22.095 Volatile Memory Backup: OK 00:08:22.095 Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.095 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:22.095 Available Spare: 0% 00:08:22.095 Available Spare Threshold: 0% 00:08:22.095 Life Percentage Used: 0% 00:08:22.095 Data Units Read: 1050 00:08:22.095 Data Units Written: 915 00:08:22.095 Host Read Commands: 57794 00:08:22.095 Host Write Commands: 56535 00:08:22.095 Controller Busy Time: 0 minutes 00:08:22.095 Power Cycles: 0 00:08:22.095 Power On Hours: 0 hours 00:08:22.095 Unsafe Shutdowns: 0 00:08:22.095 Unrecoverable Media Errors: 0 00:08:22.095 Lifetime Error Log Entries: 0 00:08:22.095 Warning Temperature Time: 0 minutes 00:08:22.095 Critical Temperature Time: 0 minutes 00:08:22.095 00:08:22.095 Number of Queues 00:08:22.095 ================ 00:08:22.095 Number of I/O Submission Queues: 64 00:08:22.095 Number of I/O Completion Queues: 64 00:08:22.095 00:08:22.095 ZNS Specific Controller Data 00:08:22.095 ============================ 00:08:22.095 Zone Append Size Limit: 0 00:08:22.095 00:08:22.095 00:08:22.095 Active Namespaces 00:08:22.095 ================= 00:08:22.095 Namespace ID:1 00:08:22.095 Error Recovery Timeout: Unlimited 00:08:22.095 Command Set Identifier: NVM (00h) 00:08:22.095 Deallocate: Supported 00:08:22.095 Deallocated/Unwritten Error: Supported 00:08:22.095 Deallocated Read Value: All 0x00 00:08:22.095 Deallocate in Write Zeroes: Not Supported 00:08:22.095 Deallocated Guard Field: 0xFFFF 00:08:22.095 Flush: Supported 00:08:22.095 Reservation: Not Supported 00:08:22.095 Namespace Sharing Capabilities: Private 00:08:22.095 
Size (in LBAs): 1310720 (5GiB) 00:08:22.095 Capacity (in LBAs): 1310720 (5GiB) 00:08:22.095 Utilization (in LBAs): 1310720 (5GiB) 00:08:22.095 Thin Provisioning: Not Supported 00:08:22.095 Per-NS Atomic Units: No 00:08:22.095 Maximum Single Source Range Length: 128 00:08:22.095 Maximum Copy Length: 128 00:08:22.095 Maximum Source Range Count: 128 00:08:22.095 NGUID/EUI64 Never Reused: No 00:08:22.095 Namespace Write Protected: No 00:08:22.095 Number of LBA Formats: 8 00:08:22.095 Current LBA Format: LBA Format #04 00:08:22.095 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.095 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.095 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.095 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.095 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.095 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.095 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.095 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.095 00:08:22.095 NVM Specific Namespace Data 00:08:22.095 =========================== 00:08:22.095 Logical Block Storage Tag Mask: 0 00:08:22.095 Protection Information Capabilities: 00:08:22.095 16b Guard Protection Information Storage Tag Support: No 00:08:22.095 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.095 Storage Tag Check Read Support: No 00:08:22.095 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.095 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.095 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.095 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.095 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.095 Extended LBA Format #05: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:22.095 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.095 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.095 ===================================================== 00:08:22.095 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:22.095 ===================================================== 00:08:22.095 Controller Capabilities/Features 00:08:22.095 ================================ 00:08:22.095 Vendor ID: 1b36 00:08:22.095 Subsystem Vendor ID: 1af4 00:08:22.095 Serial Number: 12343 00:08:22.096 Model Number: QEMU NVMe Ctrl 00:08:22.096 Firmware Version: 8.0.0 00:08:22.096 Recommended Arb Burst: 6 00:08:22.096 IEEE OUI Identifier: 00 54 52 00:08:22.096 Multi-path I/O 00:08:22.096 May have multiple subsystem ports: No 00:08:22.096 May have multiple controllers: Yes 00:08:22.096 Associated with SR-IOV VF: No 00:08:22.096 Max Data Transfer Size: 524288 00:08:22.096 Max Number of Namespaces: 256 00:08:22.096 Max Number of I/O Queues: 64 00:08:22.096 NVMe Specification Version (VS): 1.4 00:08:22.096 NVMe Specification Version (Identify): 1.4 00:08:22.096 Maximum Queue Entries: 2048 00:08:22.096 Contiguous Queues Required: Yes 00:08:22.096 Arbitration Mechanisms Supported 00:08:22.096 Weighted Round Robin: Not Supported 00:08:22.096 Vendor Specific: Not Supported 00:08:22.096 Reset Timeout: 7500 ms 00:08:22.096 Doorbell Stride: 4 bytes 00:08:22.096 NVM Subsystem Reset: Not Supported 00:08:22.096 Command Sets Supported 00:08:22.096 NVM Command Set: Supported 00:08:22.096 Boot Partition: Not Supported 00:08:22.096 Memory Page Size Minimum: 4096 bytes 00:08:22.096 Memory Page Size Maximum: 65536 bytes 00:08:22.096 Persistent Memory Region: Not Supported 00:08:22.096 Optional Asynchronous Events Supported 00:08:22.096 Namespace Attribute Notices: Supported 00:08:22.096 Firmware Activation Notices: Not Supported 00:08:22.096 ANA 
Change Notices: Not Supported 00:08:22.096 PLE Aggregate Log Change Notices: Not Supported 00:08:22.096 LBA Status Info Alert Notices: Not Supported 00:08:22.096 EGE Aggregate Log Change Notices: Not Supported 00:08:22.096 Normal NVM Subsystem Shutdown event: Not Supported 00:08:22.096 Zone Descriptor Change Notices: Not Supported 00:08:22.096 Discovery Log Change Notices: Not Supported 00:08:22.096 Controller Attributes 00:08:22.096 128-bit Host Identifier: Not Supported 00:08:22.096 Non-Operational Permissive Mode: Not Supported 00:08:22.096 NVM Sets: Not Supported 00:08:22.096 Read Recovery Levels: Not Supported 00:08:22.096 Endurance Groups: Supported 00:08:22.096 Predictable Latency Mode: Not Supported 00:08:22.096 Traffic Based Keep ALive: Not Supported 00:08:22.096 Namespace Granularity: Not Supported 00:08:22.096 SQ Associations: Not Supported 00:08:22.096 UUID List: Not Supported 00:08:22.096 Multi-Domain Subsystem: Not Supported 00:08:22.096 Fixed Capacity Management: Not Supported 00:08:22.096 Variable Capacity Management: Not Supported 00:08:22.096 Delete Endurance Group: Not Supported 00:08:22.096 Delete NVM Set: Not Supported 00:08:22.096 Extended LBA Formats Supported: Supported 00:08:22.096 Flexible Data Placement Supported: Supported 00:08:22.096 00:08:22.096 Controller Memory Buffer Support 00:08:22.096 ================================ 00:08:22.096 Supported: No 00:08:22.096 00:08:22.096 Persistent Memory Region Support 00:08:22.096 ================================ 00:08:22.096 Supported: No 00:08:22.096 00:08:22.096 Admin Command Set Attributes 00:08:22.096 ============================ 00:08:22.096 Security Send/Receive: Not Supported 00:08:22.096 Format NVM: Supported 00:08:22.096 Firmware Activate/Download: Not Supported 00:08:22.096 Namespace Management: Supported 00:08:22.096 Device Self-Test: Not Supported 00:08:22.096 Directives: Supported 00:08:22.096 NVMe-MI: Not Supported 00:08:22.096 Virtualization Management: Not Supported 00:08:22.096 
Doorbell Buffer Config: Supported 00:08:22.096 Get LBA Status Capability: Not Supported 00:08:22.096 Command & Feature Lockdown Capability: Not Supported 00:08:22.096 Abort Command Limit: 4 00:08:22.096 Async Event Request Limit: 4 00:08:22.096 Number of Firmware Slots: N/A 00:08:22.096 Firmware Slot 1 Read-Only: N/A 00:08:22.096 Firmware Activation Without Reset: N/A 00:08:22.096 Multiple Update Detection Support: N/A 00:08:22.096 Firmware Update Granularity: No Information Provided 00:08:22.096 Per-Namespace SMART Log: Yes 00:08:22.096 Asymmetric Namespace Access Log Page: Not Supported 00:08:22.096 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:22.096 Command Effects Log Page: Supported 00:08:22.096 Get Log Page Extended Data: Supported 00:08:22.096 Telemetry Log Pages: Not Supported 00:08:22.096 Persistent Event Log Pages: Not Supported 00:08:22.096 Supported Log Pages Log Page: May Support 00:08:22.096 Commands Supported & Effects Log Page: Not Supported 00:08:22.096 Feature Identifiers & Effects Log Page:May Support 00:08:22.096 NVMe-MI Commands & Effects Log Page: May Support 00:08:22.096 Data Area 4 for Telemetry Log: Not Supported 00:08:22.096 Error Log Page Entries Supported: 1 00:08:22.096 Keep Alive: Not Supported 00:08:22.096 00:08:22.096 NVM Command Set Attributes 00:08:22.096 ========================== 00:08:22.096 Submission Queue Entry Size 00:08:22.096 Max: 64 00:08:22.096 Min: 64 00:08:22.096 Completion Queue Entry Size 00:08:22.096 Max: 16 00:08:22.096 Min: 16 00:08:22.096 Number of Namespaces: 256 00:08:22.096 Compare Command: Supported 00:08:22.096 Write Uncorrectable Command: Not Supported 00:08:22.096 Dataset Management Command: Supported 00:08:22.096 Write Zeroes Command: Supported 00:08:22.096 Set Features Save Field: Supported 00:08:22.096 Reservations: Not Supported 00:08:22.096 Timestamp: Supported 00:08:22.096 Copy: Supported 00:08:22.096 Volatile Write Cache: Present 00:08:22.096 Atomic Write Unit (Normal): 1 00:08:22.096 
Atomic Write Unit (PFail): 1 00:08:22.096 Atomic Compare & Write Unit: 1 00:08:22.096 Fused Compare & Write: Not Supported 00:08:22.096 Scatter-Gather List 00:08:22.096 SGL Command Set: Supported 00:08:22.096 SGL Keyed: Not Supported 00:08:22.096 SGL Bit Bucket Descriptor: Not Supported 00:08:22.096 SGL Metadata Pointer: Not Supported 00:08:22.096 Oversized SGL: Not Supported 00:08:22.096 SGL Metadata Address: Not Supported 00:08:22.096 SGL Offset: Not Supported 00:08:22.096 Transport SGL Data Block: Not Supported 00:08:22.096 Replay Protected Memory Block: Not Supported 00:08:22.096 00:08:22.096 Firmware Slot Information 00:08:22.096 ========================= 00:08:22.096 Active slot: 1 00:08:22.096 Slot 1 Firmware Revision: 1.0 00:08:22.096 00:08:22.096 00:08:22.096 Commands Supported and Effects 00:08:22.096 ============================== 00:08:22.096 Admin Commands 00:08:22.096 -------------- 00:08:22.096 Delete I/O Submission Queue (00h): Supported 00:08:22.096 Create I/O Submission Queue (01h): Supported 00:08:22.096 Get Log Page (02h): Supported 00:08:22.096 Delete I/O Completion Queue (04h): Supported 00:08:22.096 Create I/O Completion Queue (05h): Supported 00:08:22.096 Identify (06h): Supported 00:08:22.096 Abort (08h): Supported 00:08:22.096 Set Features (09h): Supported 00:08:22.096 Get Features (0Ah): Supported 00:08:22.096 Asynchronous Event Request (0Ch): Supported 00:08:22.096 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:22.096 Directive Send (19h): Supported 00:08:22.096 Directive Receive (1Ah): Supported 00:08:22.096 Virtualization Management (1Ch): Supported 00:08:22.096 Doorbell Buffer Config (7Ch): Supported 00:08:22.096 Format NVM (80h): Supported LBA-Change 00:08:22.096 I/O Commands 00:08:22.096 ------------ 00:08:22.097 Flush (00h): Supported LBA-Change 00:08:22.097 Write (01h): Supported LBA-Change 00:08:22.097 Read (02h): Supported 00:08:22.097 Compare (05h): Supported 00:08:22.097 Write Zeroes (08h): Supported 
LBA-Change 00:08:22.097 Dataset Management (09h): Supported LBA-Change 00:08:22.097 Unknown (0Ch): Supported 00:08:22.097 Unknown (12h): Supported 00:08:22.097 Copy (19h): Supported LBA-Change 00:08:22.097 Unknown (1Dh): Supported LBA-Change 00:08:22.097 00:08:22.097 Error Log 00:08:22.097 ========= 00:08:22.097 00:08:22.097 Arbitration 00:08:22.097 =========== 00:08:22.097 Arbitration Burst: no limit 00:08:22.097 00:08:22.097 Power Management 00:08:22.097 ================ 00:08:22.097 Number of Power States: 1 00:08:22.097 Current Power State: Power State #0 00:08:22.097 Power State #0: 00:08:22.097 Max Power: 25.00 W 00:08:22.097 Non-Operational State: Operational 00:08:22.097 Entry Latency: 16 microseconds 00:08:22.097 Exit Latency: 4 microseconds 00:08:22.097 Relative Read Throughput: 0 00:08:22.097 Relative Read Latency: 0 00:08:22.097 Relative Write Throughput: 0 00:08:22.097 Relative Write Latency: 0 00:08:22.097 Idle Power: Not Reported 00:08:22.097 Active Power: Not Reported 00:08:22.097 Non-Operational Permissive Mode: Not Supported 00:08:22.097 00:08:22.097 Health Information 00:08:22.097 ================== 00:08:22.097 Critical Warnings: 00:08:22.097 Available Spare Space: OK 00:08:22.097 Temperature: OK 00:08:22.097 Device Reliability: OK 00:08:22.097 Read Only: No 00:08:22.097 Volatile Memory Backup: OK 00:08:22.097 Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.097 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:22.097 Available Spare: 0% 00:08:22.097 Available Spare Threshold: 0% 00:08:22.097 Life Percentage Used: 0% 00:08:22.097 Data Units Read: 977 00:08:22.097 Data Units Written: 906 00:08:22.097 Host Read Commands: 41250 00:08:22.097 Host Write Commands: 40673 00:08:22.097 Controller Busy Time: 0 minutes 00:08:22.097 Power Cycles: 0 00:08:22.097 Power On Hours: 0 hours 00:08:22.097 Unsafe Shutdowns: 0 00:08:22.097 Unrecoverable Media Errors: 0 00:08:22.097 Lifetime Error Log Entries: 0 00:08:22.097 Warning Temperature Time: 0 minutes 
00:08:22.097 Critical Temperature Time: 0 minutes 00:08:22.097 00:08:22.097 Number of Queues 00:08:22.097 ================ 00:08:22.097 Number of I/O Submission Queues: 64 00:08:22.097 Number of I/O Completion Queues: 64 00:08:22.097 00:08:22.097 ZNS Specific Controller Data 00:08:22.097 ============================ 00:08:22.097 Zone Append Size Limit: 0 00:08:22.097 00:08:22.097 00:08:22.097 Active Namespaces 00:08:22.097 ================= 00:08:22.097 Namespace ID:1 00:08:22.097 Error Recovery Timeout: Unlimited 00:08:22.097 Command Set Identifier: NVM (00h) 00:08:22.097 Deallocate: Supported 00:08:22.097 Deallocated/Unwritten Error: Supported 00:08:22.097 Deallocated Read Value: All 0x00 00:08:22.097 Deallocate in Write Zeroes: Not Supported 00:08:22.097 Deallocated Guard Field: 0xFFFF 00:08:22.097 Flush: Supported 00:08:22.097 Reservation: Not Supported 00:08:22.097 Namespace Sharing Capabilities: Multiple Controllers 00:08:22.097 Size (in LBAs): 262144 (1GiB) 00:08:22.097 Capacity (in LBAs): 262144 (1GiB) 00:08:22.097 Utilization (in LBAs): 262144 (1GiB) 00:08:22.097 Thin Provisioning: Not Supported 00:08:22.097 Per-NS Atomic Units: No 00:08:22.097 Maximum Single Source Range Length: 128 00:08:22.097 Maximum Copy Length: 128 00:08:22.097 Maximum Source Range Count: 128 00:08:22.097 NGUID/EUI64 Never Reused: No 00:08:22.097 Namespace Write Protected: No 00:08:22.097 Endurance group ID: 1 00:08:22.097 Number of LBA Formats: 8 00:08:22.097 Current LBA Format: LBA Format #04 00:08:22.097 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.097 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.097 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.097 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.097 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.097 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.097 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.097 LBA Format #07: Data Size: 4096 Metadata Size: 64 
00:08:22.097 00:08:22.097 Get Feature FDP: 00:08:22.097 ================ 00:08:22.097 Enabled: Yes 00:08:22.097 FDP configuration index: 0 00:08:22.097 00:08:22.097 FDP configurations log page 00:08:22.097 =========================== 00:08:22.097 Number of FDP configurations: 1 00:08:22.097 Version: 0 00:08:22.097 Size: 112 00:08:22.097 FDP Configuration Descriptor: 0 00:08:22.097 Descriptor Size: 96 00:08:22.097 Reclaim Group Identifier format: 2 00:08:22.097 FDP Volatile Write Cache: Not Present 00:08:22.097 FDP Configuration: Valid 00:08:22.097 Vendor Specific Size: 0 00:08:22.097 Number of Reclaim Groups: 2 00:08:22.097 Number of Reclaim Unit Handles: 8 00:08:22.097 Max Placement Identifiers: 128 00:08:22.097 Number of Namespaces Supported: 256 00:08:22.097 Reclaim Unit Nominal Size: 6000000 bytes 00:08:22.097 Estimated Reclaim Unit Time Limit: Not Reported 00:08:22.097 RUH Desc #000: RUH Type: Initially Isolated 00:08:22.097 RUH Desc #001: RUH Type: Initially Isolated 00:08:22.097 RUH Desc #002: RUH Type: Initially Isolated 00:08:22.097 RUH Desc #003: RUH Type: Initially Isolated 00:08:22.097 RUH Desc #004: RUH Type: Initially Isolated 00:08:22.097 RUH Desc #005: RUH Type: Initially Isolated 00:08:22.097 RUH Desc #006: RUH Type: Initially Isolated 00:08:22.097 RUH Desc #007: RUH Type: Initially Isolated 00:08:22.097 00:08:22.097 FDP reclaim unit handle usage log page 00:08:22.097 ====================================== 00:08:22.097 Number of Reclaim Unit Handles: 8 00:08:22.097 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:22.097 RUH Usage Desc #001: RUH Attributes: Unused 00:08:22.097 RUH Usage Desc #002: RUH Attributes: Unused 00:08:22.097 RUH Usage Desc #003: RUH Attributes: Unused 00:08:22.097 RUH Usage Desc #004: RUH Attributes: Unused 00:08:22.097 RUH Usage Desc #005: RUH Attributes: Unused 00:08:22.097 RUH Usage Desc #006: RUH Attributes: Unused 00:08:22.097 RUH Usage Desc #007: RUH Attributes: Unused 00:08:22.097 00:08:22.097 FDP 
statistics log page 00:08:22.097 ======================= 00:08:22.097 Host bytes with metadata written: 556179456 00:08:22.097 Media bytes with metadata written: 556257280 00:08:22.097 [2024-10-08 18:19:10.863412] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 76057 terminated unexpected 00:08:22.097 Media bytes erased: 0 00:08:22.097 00:08:22.097 FDP events log page 00:08:22.097 =================== 00:08:22.097 Number of FDP events: 0 00:08:22.097 00:08:22.097 NVM Specific Namespace Data 00:08:22.097 =========================== 00:08:22.097 Logical Block Storage Tag Mask: 0 00:08:22.097 Protection Information Capabilities: 00:08:22.097 16b Guard Protection Information Storage Tag Support: No 00:08:22.097 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.097 Storage Tag Check Read Support: No 00:08:22.097 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.097 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.097 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.097 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.097 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.097 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.097 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.098 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.098 ===================================================== 00:08:22.098 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:22.098 ===================================================== 00:08:22.098 Controller Capabilities/Features 00:08:22.098 ================================ 
00:08:22.098 Vendor ID: 1b36 00:08:22.098 Subsystem Vendor ID: 1af4 00:08:22.098 Serial Number: 12342 00:08:22.098 Model Number: QEMU NVMe Ctrl 00:08:22.098 Firmware Version: 8.0.0 00:08:22.098 Recommended Arb Burst: 6 00:08:22.098 IEEE OUI Identifier: 00 54 52 00:08:22.098 Multi-path I/O 00:08:22.098 May have multiple subsystem ports: No 00:08:22.098 May have multiple controllers: No 00:08:22.098 Associated with SR-IOV VF: No 00:08:22.098 Max Data Transfer Size: 524288 00:08:22.098 Max Number of Namespaces: 256 00:08:22.098 Max Number of I/O Queues: 64 00:08:22.098 NVMe Specification Version (VS): 1.4 00:08:22.098 NVMe Specification Version (Identify): 1.4 00:08:22.098 Maximum Queue Entries: 2048 00:08:22.098 Contiguous Queues Required: Yes 00:08:22.098 Arbitration Mechanisms Supported 00:08:22.098 Weighted Round Robin: Not Supported 00:08:22.098 Vendor Specific: Not Supported 00:08:22.098 Reset Timeout: 7500 ms 00:08:22.098 Doorbell Stride: 4 bytes 00:08:22.098 NVM Subsystem Reset: Not Supported 00:08:22.098 Command Sets Supported 00:08:22.098 NVM Command Set: Supported 00:08:22.098 Boot Partition: Not Supported 00:08:22.098 Memory Page Size Minimum: 4096 bytes 00:08:22.098 Memory Page Size Maximum: 65536 bytes 00:08:22.098 Persistent Memory Region: Not Supported 00:08:22.098 Optional Asynchronous Events Supported 00:08:22.098 Namespace Attribute Notices: Supported 00:08:22.098 Firmware Activation Notices: Not Supported 00:08:22.098 ANA Change Notices: Not Supported 00:08:22.098 PLE Aggregate Log Change Notices: Not Supported 00:08:22.098 LBA Status Info Alert Notices: Not Supported 00:08:22.098 EGE Aggregate Log Change Notices: Not Supported 00:08:22.098 Normal NVM Subsystem Shutdown event: Not Supported 00:08:22.098 Zone Descriptor Change Notices: Not Supported 00:08:22.098 Discovery Log Change Notices: Not Supported 00:08:22.098 Controller Attributes 00:08:22.098 128-bit Host Identifier: Not Supported 00:08:22.098 Non-Operational Permissive Mode: Not Supported 
00:08:22.098 NVM Sets: Not Supported 00:08:22.098 Read Recovery Levels: Not Supported 00:08:22.098 Endurance Groups: Not Supported 00:08:22.098 Predictable Latency Mode: Not Supported 00:08:22.098 Traffic Based Keep Alive: Not Supported 00:08:22.098 Namespace Granularity: Not Supported 00:08:22.098 SQ Associations: Not Supported 00:08:22.098 UUID List: Not Supported 00:08:22.098 Multi-Domain Subsystem: Not Supported 00:08:22.098 Fixed Capacity Management: Not Supported 00:08:22.098 Variable Capacity Management: Not Supported 00:08:22.098 Delete Endurance Group: Not Supported 00:08:22.098 Delete NVM Set: Not Supported 00:08:22.098 Extended LBA Formats Supported: Supported 00:08:22.098 Flexible Data Placement Supported: Not Supported 00:08:22.098 00:08:22.098 Controller Memory Buffer Support 00:08:22.098 ================================ 00:08:22.098 Supported: No 00:08:22.098 00:08:22.098 Persistent Memory Region Support 00:08:22.098 ================================ 00:08:22.098 Supported: No 00:08:22.098 00:08:22.098 Admin Command Set Attributes 00:08:22.098 ============================ 00:08:22.098 Security Send/Receive: Not Supported 00:08:22.098 Format NVM: Supported 00:08:22.098 Firmware Activate/Download: Not Supported 00:08:22.098 Namespace Management: Supported 00:08:22.098 Device Self-Test: Not Supported 00:08:22.098 Directives: Supported 00:08:22.098 NVMe-MI: Not Supported 00:08:22.098 Virtualization Management: Not Supported 00:08:22.098 Doorbell Buffer Config: Supported 00:08:22.098 Get LBA Status Capability: Not Supported 00:08:22.098 Command & Feature Lockdown Capability: Not Supported 00:08:22.098 Abort Command Limit: 4 00:08:22.098 Async Event Request Limit: 4 00:08:22.098 Number of Firmware Slots: N/A 00:08:22.098 Firmware Slot 1 Read-Only: N/A 00:08:22.098 Firmware Activation Without Reset: N/A 00:08:22.098 Multiple Update Detection Support: N/A 00:08:22.098 Firmware Update Granularity: No Information Provided 00:08:22.098 Per-Namespace SMART Log: 
Yes 00:08:22.098 Asymmetric Namespace Access Log Page: Not Supported 00:08:22.098 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:22.098 Command Effects Log Page: Supported 00:08:22.098 Get Log Page Extended Data: Supported 00:08:22.098 Telemetry Log Pages: Not Supported 00:08:22.098 Persistent Event Log Pages: Not Supported 00:08:22.098 Supported Log Pages Log Page: May Support 00:08:22.098 Commands Supported & Effects Log Page: Not Supported 00:08:22.098 Feature Identifiers & Effects Log Page: May Support 00:08:22.098 NVMe-MI Commands & Effects Log Page: May Support 00:08:22.098 Data Area 4 for Telemetry Log: Not Supported 00:08:22.098 Error Log Page Entries Supported: 1 00:08:22.098 Keep Alive: Not Supported 00:08:22.098 00:08:22.098 NVM Command Set Attributes 00:08:22.098 ========================== 00:08:22.098 Submission Queue Entry Size 00:08:22.098 Max: 64 00:08:22.098 Min: 64 00:08:22.098 Completion Queue Entry Size 00:08:22.098 Max: 16 00:08:22.098 Min: 16 00:08:22.098 Number of Namespaces: 256 00:08:22.098 Compare Command: Supported 00:08:22.098 Write Uncorrectable Command: Not Supported 00:08:22.098 Dataset Management Command: Supported 00:08:22.098 Write Zeroes Command: Supported 00:08:22.098 Set Features Save Field: Supported 00:08:22.098 Reservations: Not Supported 00:08:22.098 Timestamp: Supported 00:08:22.098 Copy: Supported 00:08:22.098 Volatile Write Cache: Present 00:08:22.098 Atomic Write Unit (Normal): 1 00:08:22.098 Atomic Write Unit (PFail): 1 00:08:22.098 Atomic Compare & Write Unit: 1 00:08:22.098 Fused Compare & Write: Not Supported 00:08:22.098 Scatter-Gather List 00:08:22.098 SGL Command Set: Supported 00:08:22.098 SGL Keyed: Not Supported 00:08:22.098 SGL Bit Bucket Descriptor: Not Supported 00:08:22.098 SGL Metadata Pointer: Not Supported 00:08:22.098 Oversized SGL: Not Supported 00:08:22.098 SGL Metadata Address: Not Supported 00:08:22.098 SGL Offset: Not Supported 00:08:22.098 Transport SGL Data Block: Not Supported 00:08:22.098 
Replay Protected Memory Block: Not Supported 00:08:22.098 00:08:22.099 Firmware Slot Information 00:08:22.099 ========================= 00:08:22.099 Active slot: 1 00:08:22.099 Slot 1 Firmware Revision: 1.0 00:08:22.099 00:08:22.099 00:08:22.099 Commands Supported and Effects 00:08:22.099 ============================== 00:08:22.099 Admin Commands 00:08:22.099 -------------- 00:08:22.099 Delete I/O Submission Queue (00h): Supported 00:08:22.099 Create I/O Submission Queue (01h): Supported 00:08:22.099 Get Log Page (02h): Supported 00:08:22.099 Delete I/O Completion Queue (04h): Supported 00:08:22.099 Create I/O Completion Queue (05h): Supported 00:08:22.099 Identify (06h): Supported 00:08:22.099 Abort (08h): Supported 00:08:22.099 Set Features (09h): Supported 00:08:22.099 Get Features (0Ah): Supported 00:08:22.099 Asynchronous Event Request (0Ch): Supported 00:08:22.099 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:22.099 Directive Send (19h): Supported 00:08:22.099 Directive Receive (1Ah): Supported 00:08:22.099 Virtualization Management (1Ch): Supported 00:08:22.099 Doorbell Buffer Config (7Ch): Supported 00:08:22.099 Format NVM (80h): Supported LBA-Change 00:08:22.099 I/O Commands 00:08:22.099 ------------ 00:08:22.099 Flush (00h): Supported LBA-Change 00:08:22.099 Write (01h): Supported LBA-Change 00:08:22.099 Read (02h): Supported 00:08:22.099 Compare (05h): Supported 00:08:22.099 Write Zeroes (08h): Supported LBA-Change 00:08:22.099 Dataset Management (09h): Supported LBA-Change 00:08:22.099 Unknown (0Ch): Supported 00:08:22.099 Unknown (12h): Supported 00:08:22.099 Copy (19h): Supported LBA-Change 00:08:22.099 Unknown (1Dh): Supported LBA-Change 00:08:22.099 00:08:22.099 Error Log 00:08:22.099 ========= 00:08:22.099 00:08:22.099 Arbitration 00:08:22.099 =========== 00:08:22.099 Arbitration Burst: no limit 00:08:22.099 00:08:22.099 Power Management 00:08:22.099 ================ 00:08:22.099 Number of Power States: 1 00:08:22.099 Current 
Power State: Power State #0 00:08:22.099 Power State #0: 00:08:22.099 Max Power: 25.00 W 00:08:22.099 Non-Operational State: Operational 00:08:22.099 Entry Latency: 16 microseconds 00:08:22.099 Exit Latency: 4 microseconds 00:08:22.099 Relative Read Throughput: 0 00:08:22.099 Relative Read Latency: 0 00:08:22.099 Relative Write Throughput: 0 00:08:22.099 Relative Write Latency: 0 00:08:22.099 Idle Power: Not Reported 00:08:22.099 Active Power: Not Reported 00:08:22.099 Non-Operational Permissive Mode: Not Supported 00:08:22.099 00:08:22.099 Health Information 00:08:22.099 ================== 00:08:22.099 Critical Warnings: 00:08:22.099 Available Spare Space: OK 00:08:22.099 Temperature: OK 00:08:22.099 Device Reliability: OK 00:08:22.099 Read Only: No 00:08:22.099 Volatile Memory Backup: OK 00:08:22.099 Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.099 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:22.099 Available Spare: 0% 00:08:22.099 Available Spare Threshold: 0% 00:08:22.099 Life Percentage Used: 0% 00:08:22.099 Data Units Read: 2299 00:08:22.099 Data Units Written: 2087 00:08:22.099 Host Read Commands: 118704 00:08:22.099 Host Write Commands: 116973 00:08:22.099 Controller Busy Time: 0 minutes 00:08:22.099 Power Cycles: 0 00:08:22.099 Power On Hours: 0 hours 00:08:22.099 Unsafe Shutdowns: 0 00:08:22.099 Unrecoverable Media Errors: 0 00:08:22.099 Lifetime Error Log Entries: 0 00:08:22.099 Warning Temperature Time: 0 minutes 00:08:22.099 Critical Temperature Time: 0 minutes 00:08:22.099 00:08:22.099 Number of Queues 00:08:22.099 ================ 00:08:22.099 Number of I/O Submission Queues: 64 00:08:22.099 Number of I/O Completion Queues: 64 00:08:22.099 00:08:22.099 ZNS Specific Controller Data 00:08:22.099 ============================ 00:08:22.099 Zone Append Size Limit: 0 00:08:22.099 00:08:22.099 00:08:22.099 Active Namespaces 00:08:22.099 ================= 00:08:22.099 Namespace ID:1 00:08:22.099 Error Recovery Timeout: Unlimited 00:08:22.099 
Command Set Identifier: NVM (00h) 00:08:22.099 Deallocate: Supported 00:08:22.099 Deallocated/Unwritten Error: Supported 00:08:22.099 Deallocated Read Value: All 0x00 00:08:22.099 Deallocate in Write Zeroes: Not Supported 00:08:22.099 Deallocated Guard Field: 0xFFFF 00:08:22.099 Flush: Supported 00:08:22.099 Reservation: Not Supported 00:08:22.099 Namespace Sharing Capabilities: Private 00:08:22.099 Size (in LBAs): 1048576 (4GiB) 00:08:22.099 Capacity (in LBAs): 1048576 (4GiB) 00:08:22.099 Utilization (in LBAs): 1048576 (4GiB) 00:08:22.099 Thin Provisioning: Not Supported 00:08:22.099 Per-NS Atomic Units: No 00:08:22.099 Maximum Single Source Range Length: 128 00:08:22.099 Maximum Copy Length: 128 00:08:22.099 Maximum Source Range Count: 128 00:08:22.099 NGUID/EUI64 Never Reused: No 00:08:22.099 Namespace Write Protected: No 00:08:22.099 Number of LBA Formats: 8 00:08:22.099 Current LBA Format: LBA Format #04 00:08:22.099 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.099 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.099 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.099 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.099 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.099 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.099 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.099 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.099 00:08:22.099 NVM Specific Namespace Data 00:08:22.099 =========================== 00:08:22.099 Logical Block Storage Tag Mask: 0 00:08:22.099 Protection Information Capabilities: 00:08:22.099 16b Guard Protection Information Storage Tag Support: No 00:08:22.099 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.099 Storage Tag Check Read Support: No 00:08:22.099 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.099 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:08:22.099 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.099 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.099 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.099 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.099 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.099 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.099 Namespace ID:2 00:08:22.099 Error Recovery Timeout: Unlimited 00:08:22.099 Command Set Identifier: NVM (00h) 00:08:22.099 Deallocate: Supported 00:08:22.099 Deallocated/Unwritten Error: Supported 00:08:22.099 Deallocated Read Value: All 0x00 00:08:22.099 Deallocate in Write Zeroes: Not Supported 00:08:22.099 Deallocated Guard Field: 0xFFFF 00:08:22.099 Flush: Supported 00:08:22.099 Reservation: Not Supported 00:08:22.099 Namespace Sharing Capabilities: Private 00:08:22.099 Size (in LBAs): 1048576 (4GiB) 00:08:22.099 Capacity (in LBAs): 1048576 (4GiB) 00:08:22.099 Utilization (in LBAs): 1048576 (4GiB) 00:08:22.099 Thin Provisioning: Not Supported 00:08:22.099 Per-NS Atomic Units: No 00:08:22.099 Maximum Single Source Range Length: 128 00:08:22.100 Maximum Copy Length: 128 00:08:22.100 Maximum Source Range Count: 128 00:08:22.100 NGUID/EUI64 Never Reused: No 00:08:22.100 Namespace Write Protected: No 00:08:22.100 Number of LBA Formats: 8 00:08:22.100 Current LBA Format: LBA Format #04 00:08:22.100 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.100 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.100 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.100 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.100 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.100 LBA Format 
#05: Data Size: 4096 Metadata Size: 8 00:08:22.100 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.100 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.100 00:08:22.100 NVM Specific Namespace Data 00:08:22.100 =========================== 00:08:22.100 Logical Block Storage Tag Mask: 0 00:08:22.100 Protection Information Capabilities: 00:08:22.100 16b Guard Protection Information Storage Tag Support: No 00:08:22.100 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.100 Storage Tag Check Read Support: No 00:08:22.100 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Namespace ID:3 00:08:22.100 Error Recovery Timeout: Unlimited 00:08:22.100 Command Set Identifier: NVM (00h) 00:08:22.100 Deallocate: Supported 00:08:22.100 Deallocated/Unwritten Error: Supported 00:08:22.100 Deallocated Read Value: All 0x00 00:08:22.100 Deallocate in Write Zeroes: Not Supported 00:08:22.100 Deallocated Guard Field: 0xFFFF 00:08:22.100 Flush: Supported 00:08:22.100 Reservation: Not Supported 00:08:22.100 Namespace Sharing Capabilities: Private 00:08:22.100 Size (in LBAs): 1048576 (4GiB) 00:08:22.100 Capacity (in LBAs): 1048576 (4GiB) 00:08:22.100 Utilization (in 
LBAs): 1048576 (4GiB) 00:08:22.100 Thin Provisioning: Not Supported 00:08:22.100 Per-NS Atomic Units: No 00:08:22.100 Maximum Single Source Range Length: 128 00:08:22.100 Maximum Copy Length: 128 00:08:22.100 Maximum Source Range Count: 128 00:08:22.100 NGUID/EUI64 Never Reused: No 00:08:22.100 Namespace Write Protected: No 00:08:22.100 Number of LBA Formats: 8 00:08:22.100 Current LBA Format: LBA Format #04 00:08:22.100 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.100 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.100 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.100 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.100 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.100 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.100 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.100 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.100 00:08:22.100 NVM Specific Namespace Data 00:08:22.100 =========================== 00:08:22.100 Logical Block Storage Tag Mask: 0 00:08:22.100 Protection Information Capabilities: 00:08:22.100 16b Guard Protection Information Storage Tag Support: No 00:08:22.100 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.100 Storage Tag Check Read Support: No 00:08:22.100 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #06: Storage Tag Size: 0 , 
Protection Information Format: 16b Guard PI 00:08:22.100 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.100 18:19:10 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:22.100 18:19:10 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:22.364 ===================================================== 00:08:22.364 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:22.364 ===================================================== 00:08:22.364 Controller Capabilities/Features 00:08:22.364 ================================ 00:08:22.364 Vendor ID: 1b36 00:08:22.364 Subsystem Vendor ID: 1af4 00:08:22.364 Serial Number: 12340 00:08:22.364 Model Number: QEMU NVMe Ctrl 00:08:22.364 Firmware Version: 8.0.0 00:08:22.364 Recommended Arb Burst: 6 00:08:22.364 IEEE OUI Identifier: 00 54 52 00:08:22.364 Multi-path I/O 00:08:22.364 May have multiple subsystem ports: No 00:08:22.364 May have multiple controllers: No 00:08:22.364 Associated with SR-IOV VF: No 00:08:22.364 Max Data Transfer Size: 524288 00:08:22.364 Max Number of Namespaces: 256 00:08:22.364 Max Number of I/O Queues: 64 00:08:22.364 NVMe Specification Version (VS): 1.4 00:08:22.364 NVMe Specification Version (Identify): 1.4 00:08:22.364 Maximum Queue Entries: 2048 00:08:22.364 Contiguous Queues Required: Yes 00:08:22.364 Arbitration Mechanisms Supported 00:08:22.364 Weighted Round Robin: Not Supported 00:08:22.364 Vendor Specific: Not Supported 00:08:22.364 Reset Timeout: 7500 ms 00:08:22.364 Doorbell Stride: 4 bytes 00:08:22.364 NVM Subsystem Reset: Not Supported 00:08:22.364 Command Sets Supported 00:08:22.364 NVM Command Set: Supported 00:08:22.364 Boot Partition: Not Supported 00:08:22.364 Memory Page Size Minimum: 4096 bytes 00:08:22.364 Memory Page Size Maximum: 65536 bytes 00:08:22.364 Persistent Memory Region: Not Supported 00:08:22.364 Optional 
Asynchronous Events Supported 00:08:22.364 Namespace Attribute Notices: Supported 00:08:22.364 Firmware Activation Notices: Not Supported 00:08:22.364 ANA Change Notices: Not Supported 00:08:22.364 PLE Aggregate Log Change Notices: Not Supported 00:08:22.364 LBA Status Info Alert Notices: Not Supported 00:08:22.364 EGE Aggregate Log Change Notices: Not Supported 00:08:22.364 Normal NVM Subsystem Shutdown event: Not Supported 00:08:22.364 Zone Descriptor Change Notices: Not Supported 00:08:22.364 Discovery Log Change Notices: Not Supported 00:08:22.364 Controller Attributes 00:08:22.364 128-bit Host Identifier: Not Supported 00:08:22.364 Non-Operational Permissive Mode: Not Supported 00:08:22.364 NVM Sets: Not Supported 00:08:22.364 Read Recovery Levels: Not Supported 00:08:22.364 Endurance Groups: Not Supported 00:08:22.364 Predictable Latency Mode: Not Supported 00:08:22.364 Traffic Based Keep Alive: Not Supported 00:08:22.364 Namespace Granularity: Not Supported 00:08:22.364 SQ Associations: Not Supported 00:08:22.364 UUID List: Not Supported 00:08:22.364 Multi-Domain Subsystem: Not Supported 00:08:22.364 Fixed Capacity Management: Not Supported 00:08:22.364 Variable Capacity Management: Not Supported 00:08:22.364 Delete Endurance Group: Not Supported 00:08:22.364 Delete NVM Set: Not Supported 00:08:22.364 Extended LBA Formats Supported: Supported 00:08:22.364 Flexible Data Placement Supported: Not Supported 00:08:22.364 00:08:22.364 Controller Memory Buffer Support 00:08:22.364 ================================ 00:08:22.364 Supported: No 00:08:22.364 00:08:22.364 Persistent Memory Region Support 00:08:22.364 ================================ 00:08:22.364 Supported: No 00:08:22.364 00:08:22.364 Admin Command Set Attributes 00:08:22.364 ============================ 00:08:22.364 Security Send/Receive: Not Supported 00:08:22.364 Format NVM: Supported 00:08:22.364 Firmware Activate/Download: Not Supported 00:08:22.364 Namespace Management: Supported 00:08:22.364 Device 
Self-Test: Not Supported 00:08:22.364 Directives: Supported 00:08:22.364 NVMe-MI: Not Supported 00:08:22.364 Virtualization Management: Not Supported 00:08:22.364 Doorbell Buffer Config: Supported 00:08:22.364 Get LBA Status Capability: Not Supported 00:08:22.364 Command & Feature Lockdown Capability: Not Supported 00:08:22.364 Abort Command Limit: 4 00:08:22.364 Async Event Request Limit: 4 00:08:22.364 Number of Firmware Slots: N/A 00:08:22.364 Firmware Slot 1 Read-Only: N/A 00:08:22.364 Firmware Activation Without Reset: N/A 00:08:22.364 Multiple Update Detection Support: N/A 00:08:22.364 Firmware Update Granularity: No Information Provided 00:08:22.364 Per-Namespace SMART Log: Yes 00:08:22.364 Asymmetric Namespace Access Log Page: Not Supported 00:08:22.364 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:22.364 Command Effects Log Page: Supported 00:08:22.364 Get Log Page Extended Data: Supported 00:08:22.364 Telemetry Log Pages: Not Supported 00:08:22.364 Persistent Event Log Pages: Not Supported 00:08:22.364 Supported Log Pages Log Page: May Support 00:08:22.364 Commands Supported & Effects Log Page: Not Supported 00:08:22.364 Feature Identifiers & Effects Log Page: May Support 00:08:22.364 NVMe-MI Commands & Effects Log Page: May Support 00:08:22.364 Data Area 4 for Telemetry Log: Not Supported 00:08:22.364 Error Log Page Entries Supported: 1 00:08:22.365 Keep Alive: Not Supported 00:08:22.365 00:08:22.365 NVM Command Set Attributes 00:08:22.365 ========================== 00:08:22.365 Submission Queue Entry Size 00:08:22.365 Max: 64 00:08:22.365 Min: 64 00:08:22.365 Completion Queue Entry Size 00:08:22.365 Max: 16 00:08:22.365 Min: 16 00:08:22.365 Number of Namespaces: 256 00:08:22.365 Compare Command: Supported 00:08:22.365 Write Uncorrectable Command: Not Supported 00:08:22.365 Dataset Management Command: Supported 00:08:22.365 Write Zeroes Command: Supported 00:08:22.365 Set Features Save Field: Supported 00:08:22.365 Reservations: Not Supported 
00:08:22.365 Timestamp: Supported 00:08:22.365 Copy: Supported 00:08:22.365 Volatile Write Cache: Present 00:08:22.365 Atomic Write Unit (Normal): 1 00:08:22.365 Atomic Write Unit (PFail): 1 00:08:22.365 Atomic Compare & Write Unit: 1 00:08:22.365 Fused Compare & Write: Not Supported 00:08:22.365 Scatter-Gather List 00:08:22.365 SGL Command Set: Supported 00:08:22.365 SGL Keyed: Not Supported 00:08:22.365 SGL Bit Bucket Descriptor: Not Supported 00:08:22.365 SGL Metadata Pointer: Not Supported 00:08:22.365 Oversized SGL: Not Supported 00:08:22.365 SGL Metadata Address: Not Supported 00:08:22.365 SGL Offset: Not Supported 00:08:22.365 Transport SGL Data Block: Not Supported 00:08:22.365 Replay Protected Memory Block: Not Supported 00:08:22.365 00:08:22.365 Firmware Slot Information 00:08:22.365 ========================= 00:08:22.365 Active slot: 1 00:08:22.365 Slot 1 Firmware Revision: 1.0 00:08:22.365 00:08:22.365 00:08:22.365 Commands Supported and Effects 00:08:22.365 ============================== 00:08:22.365 Admin Commands 00:08:22.365 -------------- 00:08:22.365 Delete I/O Submission Queue (00h): Supported 00:08:22.365 Create I/O Submission Queue (01h): Supported 00:08:22.365 Get Log Page (02h): Supported 00:08:22.365 Delete I/O Completion Queue (04h): Supported 00:08:22.365 Create I/O Completion Queue (05h): Supported 00:08:22.365 Identify (06h): Supported 00:08:22.365 Abort (08h): Supported 00:08:22.365 Set Features (09h): Supported 00:08:22.365 Get Features (0Ah): Supported 00:08:22.365 Asynchronous Event Request (0Ch): Supported 00:08:22.365 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:22.365 Directive Send (19h): Supported 00:08:22.365 Directive Receive (1Ah): Supported 00:08:22.365 Virtualization Management (1Ch): Supported 00:08:22.365 Doorbell Buffer Config (7Ch): Supported 00:08:22.365 Format NVM (80h): Supported LBA-Change 00:08:22.365 I/O Commands 00:08:22.365 ------------ 00:08:22.365 Flush (00h): Supported LBA-Change 
00:08:22.365 Write (01h): Supported LBA-Change 00:08:22.365 Read (02h): Supported 00:08:22.365 Compare (05h): Supported 00:08:22.365 Write Zeroes (08h): Supported LBA-Change 00:08:22.365 Dataset Management (09h): Supported LBA-Change 00:08:22.365 Unknown (0Ch): Supported 00:08:22.365 Unknown (12h): Supported 00:08:22.365 Copy (19h): Supported LBA-Change 00:08:22.365 Unknown (1Dh): Supported LBA-Change 00:08:22.365 00:08:22.365 Error Log 00:08:22.365 ========= 00:08:22.365 00:08:22.365 Arbitration 00:08:22.365 =========== 00:08:22.365 Arbitration Burst: no limit 00:08:22.365 00:08:22.365 Power Management 00:08:22.365 ================ 00:08:22.365 Number of Power States: 1 00:08:22.365 Current Power State: Power State #0 00:08:22.365 Power State #0: 00:08:22.365 Max Power: 25.00 W 00:08:22.365 Non-Operational State: Operational 00:08:22.365 Entry Latency: 16 microseconds 00:08:22.365 Exit Latency: 4 microseconds 00:08:22.365 Relative Read Throughput: 0 00:08:22.365 Relative Read Latency: 0 00:08:22.365 Relative Write Throughput: 0 00:08:22.365 Relative Write Latency: 0 00:08:22.365 Idle Power: Not Reported 00:08:22.365 Active Power: Not Reported 00:08:22.365 Non-Operational Permissive Mode: Not Supported 00:08:22.365 00:08:22.365 Health Information 00:08:22.365 ================== 00:08:22.365 Critical Warnings: 00:08:22.365 Available Spare Space: OK 00:08:22.365 Temperature: OK 00:08:22.365 Device Reliability: OK 00:08:22.365 Read Only: No 00:08:22.365 Volatile Memory Backup: OK 00:08:22.365 Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.365 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:22.365 Available Spare: 0% 00:08:22.365 Available Spare Threshold: 0% 00:08:22.365 Life Percentage Used: 0% 00:08:22.365 Data Units Read: 691 00:08:22.365 Data Units Written: 619 00:08:22.365 Host Read Commands: 38476 00:08:22.365 Host Write Commands: 38262 00:08:22.365 Controller Busy Time: 0 minutes 00:08:22.365 Power Cycles: 0 00:08:22.365 Power On Hours: 0 hours 
00:08:22.365 Unsafe Shutdowns: 0 00:08:22.365 Unrecoverable Media Errors: 0 00:08:22.365 Lifetime Error Log Entries: 0 00:08:22.365 Warning Temperature Time: 0 minutes 00:08:22.365 Critical Temperature Time: 0 minutes 00:08:22.365 00:08:22.365 Number of Queues 00:08:22.365 ================ 00:08:22.365 Number of I/O Submission Queues: 64 00:08:22.365 Number of I/O Completion Queues: 64 00:08:22.365 00:08:22.365 ZNS Specific Controller Data 00:08:22.365 ============================ 00:08:22.365 Zone Append Size Limit: 0 00:08:22.365 00:08:22.365 00:08:22.365 Active Namespaces 00:08:22.365 ================= 00:08:22.365 Namespace ID:1 00:08:22.365 Error Recovery Timeout: Unlimited 00:08:22.365 Command Set Identifier: NVM (00h) 00:08:22.365 Deallocate: Supported 00:08:22.365 Deallocated/Unwritten Error: Supported 00:08:22.365 Deallocated Read Value: All 0x00 00:08:22.365 Deallocate in Write Zeroes: Not Supported 00:08:22.365 Deallocated Guard Field: 0xFFFF 00:08:22.365 Flush: Supported 00:08:22.365 Reservation: Not Supported 00:08:22.365 Metadata Transferred as: Separate Metadata Buffer 00:08:22.365 Namespace Sharing Capabilities: Private 00:08:22.365 Size (in LBAs): 1548666 (5GiB) 00:08:22.365 Capacity (in LBAs): 1548666 (5GiB) 00:08:22.365 Utilization (in LBAs): 1548666 (5GiB) 00:08:22.365 Thin Provisioning: Not Supported 00:08:22.365 Per-NS Atomic Units: No 00:08:22.365 Maximum Single Source Range Length: 128 00:08:22.365 Maximum Copy Length: 128 00:08:22.365 Maximum Source Range Count: 128 00:08:22.365 NGUID/EUI64 Never Reused: No 00:08:22.365 Namespace Write Protected: No 00:08:22.365 Number of LBA Formats: 8 00:08:22.365 Current LBA Format: LBA Format #07 00:08:22.365 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.365 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.365 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.365 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.365 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:22.365 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.365 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.365 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.365 00:08:22.365 NVM Specific Namespace Data 00:08:22.365 =========================== 00:08:22.365 Logical Block Storage Tag Mask: 0 00:08:22.365 Protection Information Capabilities: 00:08:22.365 16b Guard Protection Information Storage Tag Support: No 00:08:22.365 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.365 Storage Tag Check Read Support: No 00:08:22.365 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.365 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.366 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.366 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.366 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.366 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.366 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.366 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.366 18:19:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:22.366 18:19:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:22.629 ===================================================== 00:08:22.629 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:22.629 ===================================================== 00:08:22.629 Controller Capabilities/Features 00:08:22.629 ================================ 00:08:22.629 Vendor ID: 1b36 00:08:22.629 
Subsystem Vendor ID: 1af4 00:08:22.629 Serial Number: 12341 00:08:22.629 Model Number: QEMU NVMe Ctrl 00:08:22.629 Firmware Version: 8.0.0 00:08:22.629 Recommended Arb Burst: 6 00:08:22.629 IEEE OUI Identifier: 00 54 52 00:08:22.629 Multi-path I/O 00:08:22.629 May have multiple subsystem ports: No 00:08:22.629 May have multiple controllers: No 00:08:22.629 Associated with SR-IOV VF: No 00:08:22.629 Max Data Transfer Size: 524288 00:08:22.629 Max Number of Namespaces: 256 00:08:22.629 Max Number of I/O Queues: 64 00:08:22.629 NVMe Specification Version (VS): 1.4 00:08:22.629 NVMe Specification Version (Identify): 1.4 00:08:22.629 Maximum Queue Entries: 2048 00:08:22.629 Contiguous Queues Required: Yes 00:08:22.629 Arbitration Mechanisms Supported 00:08:22.629 Weighted Round Robin: Not Supported 00:08:22.629 Vendor Specific: Not Supported 00:08:22.629 Reset Timeout: 7500 ms 00:08:22.629 Doorbell Stride: 4 bytes 00:08:22.629 NVM Subsystem Reset: Not Supported 00:08:22.629 Command Sets Supported 00:08:22.629 NVM Command Set: Supported 00:08:22.629 Boot Partition: Not Supported 00:08:22.629 Memory Page Size Minimum: 4096 bytes 00:08:22.629 Memory Page Size Maximum: 65536 bytes 00:08:22.629 Persistent Memory Region: Not Supported 00:08:22.629 Optional Asynchronous Events Supported 00:08:22.629 Namespace Attribute Notices: Supported 00:08:22.629 Firmware Activation Notices: Not Supported 00:08:22.629 ANA Change Notices: Not Supported 00:08:22.629 PLE Aggregate Log Change Notices: Not Supported 00:08:22.629 LBA Status Info Alert Notices: Not Supported 00:08:22.629 EGE Aggregate Log Change Notices: Not Supported 00:08:22.629 Normal NVM Subsystem Shutdown event: Not Supported 00:08:22.629 Zone Descriptor Change Notices: Not Supported 00:08:22.629 Discovery Log Change Notices: Not Supported 00:08:22.629 Controller Attributes 00:08:22.629 128-bit Host Identifier: Not Supported 00:08:22.629 Non-Operational Permissive Mode: Not Supported 00:08:22.629 NVM Sets: Not Supported 
00:08:22.629 Read Recovery Levels: Not Supported 00:08:22.629 Endurance Groups: Not Supported 00:08:22.629 Predictable Latency Mode: Not Supported 00:08:22.629 Traffic Based Keep ALive: Not Supported 00:08:22.629 Namespace Granularity: Not Supported 00:08:22.629 SQ Associations: Not Supported 00:08:22.629 UUID List: Not Supported 00:08:22.629 Multi-Domain Subsystem: Not Supported 00:08:22.629 Fixed Capacity Management: Not Supported 00:08:22.629 Variable Capacity Management: Not Supported 00:08:22.629 Delete Endurance Group: Not Supported 00:08:22.629 Delete NVM Set: Not Supported 00:08:22.629 Extended LBA Formats Supported: Supported 00:08:22.629 Flexible Data Placement Supported: Not Supported 00:08:22.629 00:08:22.629 Controller Memory Buffer Support 00:08:22.629 ================================ 00:08:22.629 Supported: No 00:08:22.629 00:08:22.629 Persistent Memory Region Support 00:08:22.629 ================================ 00:08:22.629 Supported: No 00:08:22.629 00:08:22.629 Admin Command Set Attributes 00:08:22.629 ============================ 00:08:22.629 Security Send/Receive: Not Supported 00:08:22.629 Format NVM: Supported 00:08:22.629 Firmware Activate/Download: Not Supported 00:08:22.629 Namespace Management: Supported 00:08:22.630 Device Self-Test: Not Supported 00:08:22.630 Directives: Supported 00:08:22.630 NVMe-MI: Not Supported 00:08:22.630 Virtualization Management: Not Supported 00:08:22.630 Doorbell Buffer Config: Supported 00:08:22.630 Get LBA Status Capability: Not Supported 00:08:22.630 Command & Feature Lockdown Capability: Not Supported 00:08:22.630 Abort Command Limit: 4 00:08:22.630 Async Event Request Limit: 4 00:08:22.630 Number of Firmware Slots: N/A 00:08:22.630 Firmware Slot 1 Read-Only: N/A 00:08:22.630 Firmware Activation Without Reset: N/A 00:08:22.630 Multiple Update Detection Support: N/A 00:08:22.630 Firmware Update Granularity: No Information Provided 00:08:22.630 Per-Namespace SMART Log: Yes 00:08:22.630 Asymmetric Namespace 
Access Log Page: Not Supported 00:08:22.630 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:22.630 Command Effects Log Page: Supported 00:08:22.630 Get Log Page Extended Data: Supported 00:08:22.630 Telemetry Log Pages: Not Supported 00:08:22.630 Persistent Event Log Pages: Not Supported 00:08:22.630 Supported Log Pages Log Page: May Support 00:08:22.630 Commands Supported & Effects Log Page: Not Supported 00:08:22.630 Feature Identifiers & Effects Log Page:May Support 00:08:22.630 NVMe-MI Commands & Effects Log Page: May Support 00:08:22.630 Data Area 4 for Telemetry Log: Not Supported 00:08:22.630 Error Log Page Entries Supported: 1 00:08:22.630 Keep Alive: Not Supported 00:08:22.630 00:08:22.630 NVM Command Set Attributes 00:08:22.630 ========================== 00:08:22.630 Submission Queue Entry Size 00:08:22.630 Max: 64 00:08:22.630 Min: 64 00:08:22.630 Completion Queue Entry Size 00:08:22.630 Max: 16 00:08:22.630 Min: 16 00:08:22.630 Number of Namespaces: 256 00:08:22.630 Compare Command: Supported 00:08:22.630 Write Uncorrectable Command: Not Supported 00:08:22.630 Dataset Management Command: Supported 00:08:22.630 Write Zeroes Command: Supported 00:08:22.630 Set Features Save Field: Supported 00:08:22.630 Reservations: Not Supported 00:08:22.630 Timestamp: Supported 00:08:22.630 Copy: Supported 00:08:22.630 Volatile Write Cache: Present 00:08:22.630 Atomic Write Unit (Normal): 1 00:08:22.630 Atomic Write Unit (PFail): 1 00:08:22.630 Atomic Compare & Write Unit: 1 00:08:22.630 Fused Compare & Write: Not Supported 00:08:22.630 Scatter-Gather List 00:08:22.630 SGL Command Set: Supported 00:08:22.630 SGL Keyed: Not Supported 00:08:22.630 SGL Bit Bucket Descriptor: Not Supported 00:08:22.630 SGL Metadata Pointer: Not Supported 00:08:22.630 Oversized SGL: Not Supported 00:08:22.630 SGL Metadata Address: Not Supported 00:08:22.630 SGL Offset: Not Supported 00:08:22.630 Transport SGL Data Block: Not Supported 00:08:22.630 Replay Protected Memory Block: Not 
Supported 00:08:22.630 00:08:22.630 Firmware Slot Information 00:08:22.630 ========================= 00:08:22.630 Active slot: 1 00:08:22.630 Slot 1 Firmware Revision: 1.0 00:08:22.630 00:08:22.630 00:08:22.630 Commands Supported and Effects 00:08:22.630 ============================== 00:08:22.630 Admin Commands 00:08:22.630 -------------- 00:08:22.630 Delete I/O Submission Queue (00h): Supported 00:08:22.630 Create I/O Submission Queue (01h): Supported 00:08:22.630 Get Log Page (02h): Supported 00:08:22.630 Delete I/O Completion Queue (04h): Supported 00:08:22.630 Create I/O Completion Queue (05h): Supported 00:08:22.630 Identify (06h): Supported 00:08:22.630 Abort (08h): Supported 00:08:22.630 Set Features (09h): Supported 00:08:22.630 Get Features (0Ah): Supported 00:08:22.630 Asynchronous Event Request (0Ch): Supported 00:08:22.630 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:22.630 Directive Send (19h): Supported 00:08:22.630 Directive Receive (1Ah): Supported 00:08:22.630 Virtualization Management (1Ch): Supported 00:08:22.630 Doorbell Buffer Config (7Ch): Supported 00:08:22.630 Format NVM (80h): Supported LBA-Change 00:08:22.630 I/O Commands 00:08:22.630 ------------ 00:08:22.630 Flush (00h): Supported LBA-Change 00:08:22.630 Write (01h): Supported LBA-Change 00:08:22.630 Read (02h): Supported 00:08:22.630 Compare (05h): Supported 00:08:22.630 Write Zeroes (08h): Supported LBA-Change 00:08:22.630 Dataset Management (09h): Supported LBA-Change 00:08:22.630 Unknown (0Ch): Supported 00:08:22.630 Unknown (12h): Supported 00:08:22.630 Copy (19h): Supported LBA-Change 00:08:22.630 Unknown (1Dh): Supported LBA-Change 00:08:22.630 00:08:22.630 Error Log 00:08:22.630 ========= 00:08:22.630 00:08:22.630 Arbitration 00:08:22.630 =========== 00:08:22.630 Arbitration Burst: no limit 00:08:22.630 00:08:22.630 Power Management 00:08:22.630 ================ 00:08:22.630 Number of Power States: 1 00:08:22.630 Current Power State: Power State #0 
00:08:22.630 Power State #0: 00:08:22.630 Max Power: 25.00 W 00:08:22.630 Non-Operational State: Operational 00:08:22.630 Entry Latency: 16 microseconds 00:08:22.630 Exit Latency: 4 microseconds 00:08:22.630 Relative Read Throughput: 0 00:08:22.630 Relative Read Latency: 0 00:08:22.630 Relative Write Throughput: 0 00:08:22.630 Relative Write Latency: 0 00:08:22.631 Idle Power: Not Reported 00:08:22.631 Active Power: Not Reported 00:08:22.631 Non-Operational Permissive Mode: Not Supported 00:08:22.631 00:08:22.631 Health Information 00:08:22.631 ================== 00:08:22.631 Critical Warnings: 00:08:22.631 Available Spare Space: OK 00:08:22.631 Temperature: OK 00:08:22.631 Device Reliability: OK 00:08:22.631 Read Only: No 00:08:22.631 Volatile Memory Backup: OK 00:08:22.631 Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.631 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:22.631 Available Spare: 0% 00:08:22.631 Available Spare Threshold: 0% 00:08:22.631 Life Percentage Used: 0% 00:08:22.631 Data Units Read: 1050 00:08:22.631 Data Units Written: 915 00:08:22.631 Host Read Commands: 57794 00:08:22.631 Host Write Commands: 56535 00:08:22.631 Controller Busy Time: 0 minutes 00:08:22.631 Power Cycles: 0 00:08:22.631 Power On Hours: 0 hours 00:08:22.631 Unsafe Shutdowns: 0 00:08:22.631 Unrecoverable Media Errors: 0 00:08:22.631 Lifetime Error Log Entries: 0 00:08:22.631 Warning Temperature Time: 0 minutes 00:08:22.631 Critical Temperature Time: 0 minutes 00:08:22.631 00:08:22.631 Number of Queues 00:08:22.631 ================ 00:08:22.631 Number of I/O Submission Queues: 64 00:08:22.631 Number of I/O Completion Queues: 64 00:08:22.631 00:08:22.631 ZNS Specific Controller Data 00:08:22.631 ============================ 00:08:22.631 Zone Append Size Limit: 0 00:08:22.631 00:08:22.631 00:08:22.631 Active Namespaces 00:08:22.631 ================= 00:08:22.631 Namespace ID:1 00:08:22.631 Error Recovery Timeout: Unlimited 00:08:22.631 Command Set Identifier: NVM 
(00h) 00:08:22.631 Deallocate: Supported 00:08:22.631 Deallocated/Unwritten Error: Supported 00:08:22.631 Deallocated Read Value: All 0x00 00:08:22.631 Deallocate in Write Zeroes: Not Supported 00:08:22.631 Deallocated Guard Field: 0xFFFF 00:08:22.631 Flush: Supported 00:08:22.631 Reservation: Not Supported 00:08:22.631 Namespace Sharing Capabilities: Private 00:08:22.631 Size (in LBAs): 1310720 (5GiB) 00:08:22.631 Capacity (in LBAs): 1310720 (5GiB) 00:08:22.631 Utilization (in LBAs): 1310720 (5GiB) 00:08:22.631 Thin Provisioning: Not Supported 00:08:22.631 Per-NS Atomic Units: No 00:08:22.631 Maximum Single Source Range Length: 128 00:08:22.631 Maximum Copy Length: 128 00:08:22.631 Maximum Source Range Count: 128 00:08:22.631 NGUID/EUI64 Never Reused: No 00:08:22.631 Namespace Write Protected: No 00:08:22.631 Number of LBA Formats: 8 00:08:22.631 Current LBA Format: LBA Format #04 00:08:22.631 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.631 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.631 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.631 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.631 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.631 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.631 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.631 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.631 00:08:22.631 NVM Specific Namespace Data 00:08:22.631 =========================== 00:08:22.631 Logical Block Storage Tag Mask: 0 00:08:22.631 Protection Information Capabilities: 00:08:22.631 16b Guard Protection Information Storage Tag Support: No 00:08:22.631 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.631 Storage Tag Check Read Support: No 00:08:22.631 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.631 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
00:08:22.631 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.631 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.631 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.631 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.631 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.631 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.631 18:19:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:22.631 18:19:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:22.894 ===================================================== 00:08:22.894 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:22.894 ===================================================== 00:08:22.894 Controller Capabilities/Features 00:08:22.894 ================================ 00:08:22.894 Vendor ID: 1b36 00:08:22.894 Subsystem Vendor ID: 1af4 00:08:22.894 Serial Number: 12342 00:08:22.894 Model Number: QEMU NVMe Ctrl 00:08:22.894 Firmware Version: 8.0.0 00:08:22.894 Recommended Arb Burst: 6 00:08:22.894 IEEE OUI Identifier: 00 54 52 00:08:22.894 Multi-path I/O 00:08:22.894 May have multiple subsystem ports: No 00:08:22.894 May have multiple controllers: No 00:08:22.894 Associated with SR-IOV VF: No 00:08:22.894 Max Data Transfer Size: 524288 00:08:22.894 Max Number of Namespaces: 256 00:08:22.894 Max Number of I/O Queues: 64 00:08:22.894 NVMe Specification Version (VS): 1.4 00:08:22.894 NVMe Specification Version (Identify): 1.4 00:08:22.894 Maximum Queue Entries: 2048 00:08:22.894 Contiguous Queues Required: Yes 00:08:22.894 Arbitration Mechanisms Supported 00:08:22.894 Weighted Round Robin: Not 
Supported 00:08:22.894 Vendor Specific: Not Supported 00:08:22.894 Reset Timeout: 7500 ms 00:08:22.894 Doorbell Stride: 4 bytes 00:08:22.894 NVM Subsystem Reset: Not Supported 00:08:22.894 Command Sets Supported 00:08:22.894 NVM Command Set: Supported 00:08:22.894 Boot Partition: Not Supported 00:08:22.894 Memory Page Size Minimum: 4096 bytes 00:08:22.895 Memory Page Size Maximum: 65536 bytes 00:08:22.895 Persistent Memory Region: Not Supported 00:08:22.895 Optional Asynchronous Events Supported 00:08:22.895 Namespace Attribute Notices: Supported 00:08:22.895 Firmware Activation Notices: Not Supported 00:08:22.895 ANA Change Notices: Not Supported 00:08:22.895 PLE Aggregate Log Change Notices: Not Supported 00:08:22.895 LBA Status Info Alert Notices: Not Supported 00:08:22.895 EGE Aggregate Log Change Notices: Not Supported 00:08:22.895 Normal NVM Subsystem Shutdown event: Not Supported 00:08:22.895 Zone Descriptor Change Notices: Not Supported 00:08:22.895 Discovery Log Change Notices: Not Supported 00:08:22.895 Controller Attributes 00:08:22.895 128-bit Host Identifier: Not Supported 00:08:22.895 Non-Operational Permissive Mode: Not Supported 00:08:22.895 NVM Sets: Not Supported 00:08:22.895 Read Recovery Levels: Not Supported 00:08:22.895 Endurance Groups: Not Supported 00:08:22.895 Predictable Latency Mode: Not Supported 00:08:22.895 Traffic Based Keep ALive: Not Supported 00:08:22.895 Namespace Granularity: Not Supported 00:08:22.895 SQ Associations: Not Supported 00:08:22.895 UUID List: Not Supported 00:08:22.895 Multi-Domain Subsystem: Not Supported 00:08:22.895 Fixed Capacity Management: Not Supported 00:08:22.895 Variable Capacity Management: Not Supported 00:08:22.895 Delete Endurance Group: Not Supported 00:08:22.895 Delete NVM Set: Not Supported 00:08:22.895 Extended LBA Formats Supported: Supported 00:08:22.895 Flexible Data Placement Supported: Not Supported 00:08:22.895 00:08:22.895 Controller Memory Buffer Support 00:08:22.895 
================================ 00:08:22.895 Supported: No 00:08:22.895 00:08:22.895 Persistent Memory Region Support 00:08:22.895 ================================ 00:08:22.895 Supported: No 00:08:22.895 00:08:22.895 Admin Command Set Attributes 00:08:22.895 ============================ 00:08:22.895 Security Send/Receive: Not Supported 00:08:22.895 Format NVM: Supported 00:08:22.895 Firmware Activate/Download: Not Supported 00:08:22.895 Namespace Management: Supported 00:08:22.895 Device Self-Test: Not Supported 00:08:22.895 Directives: Supported 00:08:22.895 NVMe-MI: Not Supported 00:08:22.895 Virtualization Management: Not Supported 00:08:22.895 Doorbell Buffer Config: Supported 00:08:22.895 Get LBA Status Capability: Not Supported 00:08:22.895 Command & Feature Lockdown Capability: Not Supported 00:08:22.895 Abort Command Limit: 4 00:08:22.895 Async Event Request Limit: 4 00:08:22.895 Number of Firmware Slots: N/A 00:08:22.895 Firmware Slot 1 Read-Only: N/A 00:08:22.895 Firmware Activation Without Reset: N/A 00:08:22.895 Multiple Update Detection Support: N/A 00:08:22.895 Firmware Update Granularity: No Information Provided 00:08:22.895 Per-Namespace SMART Log: Yes 00:08:22.895 Asymmetric Namespace Access Log Page: Not Supported 00:08:22.895 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:22.895 Command Effects Log Page: Supported 00:08:22.895 Get Log Page Extended Data: Supported 00:08:22.895 Telemetry Log Pages: Not Supported 00:08:22.895 Persistent Event Log Pages: Not Supported 00:08:22.895 Supported Log Pages Log Page: May Support 00:08:22.895 Commands Supported & Effects Log Page: Not Supported 00:08:22.895 Feature Identifiers & Effects Log Page:May Support 00:08:22.895 NVMe-MI Commands & Effects Log Page: May Support 00:08:22.895 Data Area 4 for Telemetry Log: Not Supported 00:08:22.895 Error Log Page Entries Supported: 1 00:08:22.895 Keep Alive: Not Supported 00:08:22.895 00:08:22.895 NVM Command Set Attributes 00:08:22.895 ========================== 
00:08:22.895 Submission Queue Entry Size 00:08:22.895 Max: 64 00:08:22.895 Min: 64 00:08:22.895 Completion Queue Entry Size 00:08:22.895 Max: 16 00:08:22.895 Min: 16 00:08:22.895 Number of Namespaces: 256 00:08:22.895 Compare Command: Supported 00:08:22.895 Write Uncorrectable Command: Not Supported 00:08:22.895 Dataset Management Command: Supported 00:08:22.895 Write Zeroes Command: Supported 00:08:22.895 Set Features Save Field: Supported 00:08:22.895 Reservations: Not Supported 00:08:22.895 Timestamp: Supported 00:08:22.895 Copy: Supported 00:08:22.895 Volatile Write Cache: Present 00:08:22.895 Atomic Write Unit (Normal): 1 00:08:22.895 Atomic Write Unit (PFail): 1 00:08:22.895 Atomic Compare & Write Unit: 1 00:08:22.895 Fused Compare & Write: Not Supported 00:08:22.895 Scatter-Gather List 00:08:22.895 SGL Command Set: Supported 00:08:22.895 SGL Keyed: Not Supported 00:08:22.895 SGL Bit Bucket Descriptor: Not Supported 00:08:22.895 SGL Metadata Pointer: Not Supported 00:08:22.895 Oversized SGL: Not Supported 00:08:22.895 SGL Metadata Address: Not Supported 00:08:22.895 SGL Offset: Not Supported 00:08:22.895 Transport SGL Data Block: Not Supported 00:08:22.895 Replay Protected Memory Block: Not Supported 00:08:22.895 00:08:22.895 Firmware Slot Information 00:08:22.895 ========================= 00:08:22.895 Active slot: 1 00:08:22.895 Slot 1 Firmware Revision: 1.0 00:08:22.895 00:08:22.895 00:08:22.895 Commands Supported and Effects 00:08:22.895 ============================== 00:08:22.895 Admin Commands 00:08:22.895 -------------- 00:08:22.895 Delete I/O Submission Queue (00h): Supported 00:08:22.895 Create I/O Submission Queue (01h): Supported 00:08:22.895 Get Log Page (02h): Supported 00:08:22.895 Delete I/O Completion Queue (04h): Supported 00:08:22.895 Create I/O Completion Queue (05h): Supported 00:08:22.895 Identify (06h): Supported 00:08:22.895 Abort (08h): Supported 00:08:22.895 Set Features (09h): Supported 00:08:22.895 Get Features (0Ah): Supported 
00:08:22.895 Asynchronous Event Request (0Ch): Supported 00:08:22.895 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:22.896 Directive Send (19h): Supported 00:08:22.896 Directive Receive (1Ah): Supported 00:08:22.896 Virtualization Management (1Ch): Supported 00:08:22.896 Doorbell Buffer Config (7Ch): Supported 00:08:22.896 Format NVM (80h): Supported LBA-Change 00:08:22.896 I/O Commands 00:08:22.896 ------------ 00:08:22.896 Flush (00h): Supported LBA-Change 00:08:22.896 Write (01h): Supported LBA-Change 00:08:22.896 Read (02h): Supported 00:08:22.896 Compare (05h): Supported 00:08:22.896 Write Zeroes (08h): Supported LBA-Change 00:08:22.896 Dataset Management (09h): Supported LBA-Change 00:08:22.896 Unknown (0Ch): Supported 00:08:22.896 Unknown (12h): Supported 00:08:22.896 Copy (19h): Supported LBA-Change 00:08:22.896 Unknown (1Dh): Supported LBA-Change 00:08:22.896 00:08:22.896 Error Log 00:08:22.896 ========= 00:08:22.896 00:08:22.896 Arbitration 00:08:22.896 =========== 00:08:22.896 Arbitration Burst: no limit 00:08:22.896 00:08:22.896 Power Management 00:08:22.896 ================ 00:08:22.896 Number of Power States: 1 00:08:22.896 Current Power State: Power State #0 00:08:22.896 Power State #0: 00:08:22.896 Max Power: 25.00 W 00:08:22.896 Non-Operational State: Operational 00:08:22.896 Entry Latency: 16 microseconds 00:08:22.896 Exit Latency: 4 microseconds 00:08:22.896 Relative Read Throughput: 0 00:08:22.896 Relative Read Latency: 0 00:08:22.896 Relative Write Throughput: 0 00:08:22.896 Relative Write Latency: 0 00:08:22.896 Idle Power: Not Reported 00:08:22.896 Active Power: Not Reported 00:08:22.896 Non-Operational Permissive Mode: Not Supported 00:08:22.896 00:08:22.896 Health Information 00:08:22.896 ================== 00:08:22.896 Critical Warnings: 00:08:22.896 Available Spare Space: OK 00:08:22.896 Temperature: OK 00:08:22.896 Device Reliability: OK 00:08:22.896 Read Only: No 00:08:22.896 Volatile Memory Backup: OK 00:08:22.896 
Current Temperature: 323 Kelvin (50 Celsius) 00:08:22.896 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:22.896 Available Spare: 0% 00:08:22.896 Available Spare Threshold: 0% 00:08:22.896 Life Percentage Used: 0% 00:08:22.896 Data Units Read: 2299 00:08:22.896 Data Units Written: 2087 00:08:22.896 Host Read Commands: 118704 00:08:22.896 Host Write Commands: 116973 00:08:22.896 Controller Busy Time: 0 minutes 00:08:22.896 Power Cycles: 0 00:08:22.896 Power On Hours: 0 hours 00:08:22.896 Unsafe Shutdowns: 0 00:08:22.896 Unrecoverable Media Errors: 0 00:08:22.896 Lifetime Error Log Entries: 0 00:08:22.896 Warning Temperature Time: 0 minutes 00:08:22.896 Critical Temperature Time: 0 minutes 00:08:22.896 00:08:22.896 Number of Queues 00:08:22.896 ================ 00:08:22.896 Number of I/O Submission Queues: 64 00:08:22.896 Number of I/O Completion Queues: 64 00:08:22.896 00:08:22.896 ZNS Specific Controller Data 00:08:22.896 ============================ 00:08:22.896 Zone Append Size Limit: 0 00:08:22.896 00:08:22.896 00:08:22.896 Active Namespaces 00:08:22.896 ================= 00:08:22.896 Namespace ID:1 00:08:22.896 Error Recovery Timeout: Unlimited 00:08:22.896 Command Set Identifier: NVM (00h) 00:08:22.896 Deallocate: Supported 00:08:22.896 Deallocated/Unwritten Error: Supported 00:08:22.896 Deallocated Read Value: All 0x00 00:08:22.896 Deallocate in Write Zeroes: Not Supported 00:08:22.896 Deallocated Guard Field: 0xFFFF 00:08:22.896 Flush: Supported 00:08:22.896 Reservation: Not Supported 00:08:22.896 Namespace Sharing Capabilities: Private 00:08:22.896 Size (in LBAs): 1048576 (4GiB) 00:08:22.896 Capacity (in LBAs): 1048576 (4GiB) 00:08:22.896 Utilization (in LBAs): 1048576 (4GiB) 00:08:22.896 Thin Provisioning: Not Supported 00:08:22.896 Per-NS Atomic Units: No 00:08:22.896 Maximum Single Source Range Length: 128 00:08:22.896 Maximum Copy Length: 128 00:08:22.896 Maximum Source Range Count: 128 00:08:22.896 NGUID/EUI64 Never Reused: No 00:08:22.896 
Namespace Write Protected: No 00:08:22.896 Number of LBA Formats: 8 00:08:22.896 Current LBA Format: LBA Format #04 00:08:22.896 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.896 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.896 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.896 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.896 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.896 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.896 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.896 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.896 00:08:22.896 NVM Specific Namespace Data 00:08:22.896 =========================== 00:08:22.896 Logical Block Storage Tag Mask: 0 00:08:22.896 Protection Information Capabilities: 00:08:22.896 16b Guard Protection Information Storage Tag Support: No 00:08:22.896 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.896 Storage Tag Check Read Support: No 00:08:22.896 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.896 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.896 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Namespace ID:2 00:08:22.897 Error Recovery Timeout: Unlimited 00:08:22.897 Command Set Identifier: NVM (00h) 00:08:22.897 Deallocate: 
Supported 00:08:22.897 Deallocated/Unwritten Error: Supported 00:08:22.897 Deallocated Read Value: All 0x00 00:08:22.897 Deallocate in Write Zeroes: Not Supported 00:08:22.897 Deallocated Guard Field: 0xFFFF 00:08:22.897 Flush: Supported 00:08:22.897 Reservation: Not Supported 00:08:22.897 Namespace Sharing Capabilities: Private 00:08:22.897 Size (in LBAs): 1048576 (4GiB) 00:08:22.897 Capacity (in LBAs): 1048576 (4GiB) 00:08:22.897 Utilization (in LBAs): 1048576 (4GiB) 00:08:22.897 Thin Provisioning: Not Supported 00:08:22.897 Per-NS Atomic Units: No 00:08:22.897 Maximum Single Source Range Length: 128 00:08:22.897 Maximum Copy Length: 128 00:08:22.897 Maximum Source Range Count: 128 00:08:22.897 NGUID/EUI64 Never Reused: No 00:08:22.897 Namespace Write Protected: No 00:08:22.897 Number of LBA Formats: 8 00:08:22.897 Current LBA Format: LBA Format #04 00:08:22.897 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.897 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.897 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.897 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.897 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.897 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.897 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.897 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.897 00:08:22.897 NVM Specific Namespace Data 00:08:22.897 =========================== 00:08:22.897 Logical Block Storage Tag Mask: 0 00:08:22.897 Protection Information Capabilities: 00:08:22.897 16b Guard Protection Information Storage Tag Support: No 00:08:22.897 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.897 Storage Tag Check Read Support: No 00:08:22.897 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA 
Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Namespace ID:3 00:08:22.897 Error Recovery Timeout: Unlimited 00:08:22.897 Command Set Identifier: NVM (00h) 00:08:22.897 Deallocate: Supported 00:08:22.897 Deallocated/Unwritten Error: Supported 00:08:22.897 Deallocated Read Value: All 0x00 00:08:22.897 Deallocate in Write Zeroes: Not Supported 00:08:22.897 Deallocated Guard Field: 0xFFFF 00:08:22.897 Flush: Supported 00:08:22.897 Reservation: Not Supported 00:08:22.897 Namespace Sharing Capabilities: Private 00:08:22.897 Size (in LBAs): 1048576 (4GiB) 00:08:22.897 Capacity (in LBAs): 1048576 (4GiB) 00:08:22.897 Utilization (in LBAs): 1048576 (4GiB) 00:08:22.897 Thin Provisioning: Not Supported 00:08:22.897 Per-NS Atomic Units: No 00:08:22.897 Maximum Single Source Range Length: 128 00:08:22.897 Maximum Copy Length: 128 00:08:22.897 Maximum Source Range Count: 128 00:08:22.897 NGUID/EUI64 Never Reused: No 00:08:22.897 Namespace Write Protected: No 00:08:22.897 Number of LBA Formats: 8 00:08:22.897 Current LBA Format: LBA Format #04 00:08:22.897 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:22.897 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:22.897 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:22.897 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:22.897 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:22.897 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:22.897 LBA 
Format #06: Data Size: 4096 Metadata Size: 16 00:08:22.897 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:22.897 00:08:22.897 NVM Specific Namespace Data 00:08:22.897 =========================== 00:08:22.897 Logical Block Storage Tag Mask: 0 00:08:22.897 Protection Information Capabilities: 00:08:22.897 16b Guard Protection Information Storage Tag Support: No 00:08:22.897 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:22.897 Storage Tag Check Read Support: No 00:08:22.897 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:22.897 18:19:11 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:22.897 18:19:11 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:23.161 ===================================================== 00:08:23.161 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:23.161 ===================================================== 00:08:23.161 Controller Capabilities/Features 00:08:23.161 ================================ 00:08:23.161 Vendor ID: 1b36 00:08:23.161 Subsystem Vendor ID: 1af4 00:08:23.161 Serial Number: 12343 00:08:23.161 Model 
Number: QEMU NVMe Ctrl 00:08:23.161 Firmware Version: 8.0.0 00:08:23.161 Recommended Arb Burst: 6 00:08:23.161 IEEE OUI Identifier: 00 54 52 00:08:23.161 Multi-path I/O 00:08:23.161 May have multiple subsystem ports: No 00:08:23.161 May have multiple controllers: Yes 00:08:23.161 Associated with SR-IOV VF: No 00:08:23.161 Max Data Transfer Size: 524288 00:08:23.161 Max Number of Namespaces: 256 00:08:23.161 Max Number of I/O Queues: 64 00:08:23.161 NVMe Specification Version (VS): 1.4 00:08:23.161 NVMe Specification Version (Identify): 1.4 00:08:23.161 Maximum Queue Entries: 2048 00:08:23.161 Contiguous Queues Required: Yes 00:08:23.161 Arbitration Mechanisms Supported 00:08:23.161 Weighted Round Robin: Not Supported 00:08:23.161 Vendor Specific: Not Supported 00:08:23.161 Reset Timeout: 7500 ms 00:08:23.161 Doorbell Stride: 4 bytes 00:08:23.162 NVM Subsystem Reset: Not Supported 00:08:23.162 Command Sets Supported 00:08:23.162 NVM Command Set: Supported 00:08:23.162 Boot Partition: Not Supported 00:08:23.162 Memory Page Size Minimum: 4096 bytes 00:08:23.162 Memory Page Size Maximum: 65536 bytes 00:08:23.162 Persistent Memory Region: Not Supported 00:08:23.162 Optional Asynchronous Events Supported 00:08:23.162 Namespace Attribute Notices: Supported 00:08:23.162 Firmware Activation Notices: Not Supported 00:08:23.162 ANA Change Notices: Not Supported 00:08:23.162 PLE Aggregate Log Change Notices: Not Supported 00:08:23.162 LBA Status Info Alert Notices: Not Supported 00:08:23.162 EGE Aggregate Log Change Notices: Not Supported 00:08:23.162 Normal NVM Subsystem Shutdown event: Not Supported 00:08:23.162 Zone Descriptor Change Notices: Not Supported 00:08:23.162 Discovery Log Change Notices: Not Supported 00:08:23.162 Controller Attributes 00:08:23.162 128-bit Host Identifier: Not Supported 00:08:23.162 Non-Operational Permissive Mode: Not Supported 00:08:23.162 NVM Sets: Not Supported 00:08:23.162 Read Recovery Levels: Not Supported 00:08:23.162 Endurance Groups: 
Supported 00:08:23.162 Predictable Latency Mode: Not Supported 00:08:23.162 Traffic Based Keep Alive: Not Supported 00:08:23.162 Namespace Granularity: Not Supported 00:08:23.162 SQ Associations: Not Supported 00:08:23.162 UUID List: Not Supported 00:08:23.162 Multi-Domain Subsystem: Not Supported 00:08:23.162 Fixed Capacity Management: Not Supported 00:08:23.162 Variable Capacity Management: Not Supported 00:08:23.162 Delete Endurance Group: Not Supported 00:08:23.162 Delete NVM Set: Not Supported 00:08:23.162 Extended LBA Formats Supported: Supported 00:08:23.162 Flexible Data Placement Supported: Supported 00:08:23.162 00:08:23.162 Controller Memory Buffer Support 00:08:23.162 ================================ 00:08:23.162 Supported: No 00:08:23.162 00:08:23.162 Persistent Memory Region Support 00:08:23.162 ================================ 00:08:23.162 Supported: No 00:08:23.162 00:08:23.162 Admin Command Set Attributes 00:08:23.162 ============================ 00:08:23.162 Security Send/Receive: Not Supported 00:08:23.162 Format NVM: Supported 00:08:23.162 Firmware Activate/Download: Not Supported 00:08:23.162 Namespace Management: Supported 00:08:23.162 Device Self-Test: Not Supported 00:08:23.162 Directives: Supported 00:08:23.162 NVMe-MI: Not Supported 00:08:23.162 Virtualization Management: Not Supported 00:08:23.162 Doorbell Buffer Config: Supported 00:08:23.162 Get LBA Status Capability: Not Supported 00:08:23.162 Command & Feature Lockdown Capability: Not Supported 00:08:23.162 Abort Command Limit: 4 00:08:23.162 Async Event Request Limit: 4 00:08:23.162 Number of Firmware Slots: N/A 00:08:23.162 Firmware Slot 1 Read-Only: N/A 00:08:23.162 Firmware Activation Without Reset: N/A 00:08:23.162 Multiple Update Detection Support: N/A 00:08:23.162 Firmware Update Granularity: No Information Provided 00:08:23.162 Per-Namespace SMART Log: Yes 00:08:23.162 Asymmetric Namespace Access Log Page: Not Supported 00:08:23.162 Subsystem NQN:
nqn.2019-08.org.qemu:fdp-subsys3 00:08:23.162 Command Effects Log Page: Supported 00:08:23.162 Get Log Page Extended Data: Supported 00:08:23.162 Telemetry Log Pages: Not Supported 00:08:23.162 Persistent Event Log Pages: Not Supported 00:08:23.162 Supported Log Pages Log Page: May Support 00:08:23.162 Commands Supported & Effects Log Page: Not Supported 00:08:23.162 Feature Identifiers & Effects Log Page: May Support 00:08:23.162 NVMe-MI Commands & Effects Log Page: May Support 00:08:23.162 Data Area 4 for Telemetry Log: Not Supported 00:08:23.162 Error Log Page Entries Supported: 1 00:08:23.162 Keep Alive: Not Supported 00:08:23.162 00:08:23.162 NVM Command Set Attributes 00:08:23.162 ========================== 00:08:23.162 Submission Queue Entry Size 00:08:23.162 Max: 64 00:08:23.162 Min: 64 00:08:23.162 Completion Queue Entry Size 00:08:23.162 Max: 16 00:08:23.162 Min: 16 00:08:23.162 Number of Namespaces: 256 00:08:23.162 Compare Command: Supported 00:08:23.162 Write Uncorrectable Command: Not Supported 00:08:23.162 Dataset Management Command: Supported 00:08:23.162 Write Zeroes Command: Supported 00:08:23.162 Set Features Save Field: Supported 00:08:23.162 Reservations: Not Supported 00:08:23.162 Timestamp: Supported 00:08:23.162 Copy: Supported 00:08:23.162 Volatile Write Cache: Present 00:08:23.162 Atomic Write Unit (Normal): 1 00:08:23.162 Atomic Write Unit (PFail): 1 00:08:23.162 Atomic Compare & Write Unit: 1 00:08:23.162 Fused Compare & Write: Not Supported 00:08:23.162 Scatter-Gather List 00:08:23.162 SGL Command Set: Supported 00:08:23.162 SGL Keyed: Not Supported 00:08:23.162 SGL Bit Bucket Descriptor: Not Supported 00:08:23.162 SGL Metadata Pointer: Not Supported 00:08:23.162 Oversized SGL: Not Supported 00:08:23.162 SGL Metadata Address: Not Supported 00:08:23.162 SGL Offset: Not Supported 00:08:23.162 Transport SGL Data Block: Not Supported 00:08:23.162 Replay Protected Memory Block: Not Supported 00:08:23.162 00:08:23.162 Firmware Slot Information
00:08:23.162 ========================= 00:08:23.162 Active slot: 1 00:08:23.162 Slot 1 Firmware Revision: 1.0 00:08:23.162 00:08:23.162 00:08:23.162 Commands Supported and Effects 00:08:23.162 ============================== 00:08:23.162 Admin Commands 00:08:23.162 -------------- 00:08:23.162 Delete I/O Submission Queue (00h): Supported 00:08:23.162 Create I/O Submission Queue (01h): Supported 00:08:23.162 Get Log Page (02h): Supported 00:08:23.163 Delete I/O Completion Queue (04h): Supported 00:08:23.163 Create I/O Completion Queue (05h): Supported 00:08:23.163 Identify (06h): Supported 00:08:23.163 Abort (08h): Supported 00:08:23.163 Set Features (09h): Supported 00:08:23.163 Get Features (0Ah): Supported 00:08:23.163 Asynchronous Event Request (0Ch): Supported 00:08:23.163 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:23.163 Directive Send (19h): Supported 00:08:23.163 Directive Receive (1Ah): Supported 00:08:23.163 Virtualization Management (1Ch): Supported 00:08:23.163 Doorbell Buffer Config (7Ch): Supported 00:08:23.163 Format NVM (80h): Supported LBA-Change 00:08:23.163 I/O Commands 00:08:23.163 ------------ 00:08:23.163 Flush (00h): Supported LBA-Change 00:08:23.163 Write (01h): Supported LBA-Change 00:08:23.163 Read (02h): Supported 00:08:23.163 Compare (05h): Supported 00:08:23.163 Write Zeroes (08h): Supported LBA-Change 00:08:23.163 Dataset Management (09h): Supported LBA-Change 00:08:23.163 Unknown (0Ch): Supported 00:08:23.163 Unknown (12h): Supported 00:08:23.163 Copy (19h): Supported LBA-Change 00:08:23.163 Unknown (1Dh): Supported LBA-Change 00:08:23.163 00:08:23.163 Error Log 00:08:23.163 ========= 00:08:23.163 00:08:23.163 Arbitration 00:08:23.163 =========== 00:08:23.163 Arbitration Burst: no limit 00:08:23.163 00:08:23.163 Power Management 00:08:23.163 ================ 00:08:23.163 Number of Power States: 1 00:08:23.163 Current Power State: Power State #0 00:08:23.163 Power State #0: 00:08:23.163 Max Power: 25.00 W 00:08:23.163 
Non-Operational State: Operational 00:08:23.163 Entry Latency: 16 microseconds 00:08:23.163 Exit Latency: 4 microseconds 00:08:23.163 Relative Read Throughput: 0 00:08:23.163 Relative Read Latency: 0 00:08:23.163 Relative Write Throughput: 0 00:08:23.163 Relative Write Latency: 0 00:08:23.163 Idle Power: Not Reported 00:08:23.163 Active Power: Not Reported 00:08:23.163 Non-Operational Permissive Mode: Not Supported 00:08:23.163 00:08:23.163 Health Information 00:08:23.163 ================== 00:08:23.163 Critical Warnings: 00:08:23.163 Available Spare Space: OK 00:08:23.163 Temperature: OK 00:08:23.163 Device Reliability: OK 00:08:23.163 Read Only: No 00:08:23.163 Volatile Memory Backup: OK 00:08:23.163 Current Temperature: 323 Kelvin (50 Celsius) 00:08:23.163 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:23.163 Available Spare: 0% 00:08:23.163 Available Spare Threshold: 0% 00:08:23.163 Life Percentage Used: 0% 00:08:23.163 Data Units Read: 977 00:08:23.163 Data Units Written: 906 00:08:23.163 Host Read Commands: 41250 00:08:23.163 Host Write Commands: 40673 00:08:23.163 Controller Busy Time: 0 minutes 00:08:23.163 Power Cycles: 0 00:08:23.163 Power On Hours: 0 hours 00:08:23.163 Unsafe Shutdowns: 0 00:08:23.163 Unrecoverable Media Errors: 0 00:08:23.163 Lifetime Error Log Entries: 0 00:08:23.163 Warning Temperature Time: 0 minutes 00:08:23.163 Critical Temperature Time: 0 minutes 00:08:23.163 00:08:23.163 Number of Queues 00:08:23.163 ================ 00:08:23.163 Number of I/O Submission Queues: 64 00:08:23.163 Number of I/O Completion Queues: 64 00:08:23.163 00:08:23.163 ZNS Specific Controller Data 00:08:23.163 ============================ 00:08:23.163 Zone Append Size Limit: 0 00:08:23.163 00:08:23.163 00:08:23.163 Active Namespaces 00:08:23.163 ================= 00:08:23.163 Namespace ID:1 00:08:23.163 Error Recovery Timeout: Unlimited 00:08:23.163 Command Set Identifier: NVM (00h) 00:08:23.163 Deallocate: Supported 00:08:23.163 Deallocated/Unwritten 
Error: Supported 00:08:23.163 Deallocated Read Value: All 0x00 00:08:23.163 Deallocate in Write Zeroes: Not Supported 00:08:23.163 Deallocated Guard Field: 0xFFFF 00:08:23.163 Flush: Supported 00:08:23.163 Reservation: Not Supported 00:08:23.163 Namespace Sharing Capabilities: Multiple Controllers 00:08:23.163 Size (in LBAs): 262144 (1GiB) 00:08:23.163 Capacity (in LBAs): 262144 (1GiB) 00:08:23.163 Utilization (in LBAs): 262144 (1GiB) 00:08:23.163 Thin Provisioning: Not Supported 00:08:23.163 Per-NS Atomic Units: No 00:08:23.163 Maximum Single Source Range Length: 128 00:08:23.163 Maximum Copy Length: 128 00:08:23.163 Maximum Source Range Count: 128 00:08:23.163 NGUID/EUI64 Never Reused: No 00:08:23.163 Namespace Write Protected: No 00:08:23.163 Endurance group ID: 1 00:08:23.163 Number of LBA Formats: 8 00:08:23.163 Current LBA Format: LBA Format #04 00:08:23.163 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:23.163 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:23.163 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:23.163 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:23.163 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:23.163 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:23.163 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:23.163 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:23.163 00:08:23.163 Get Feature FDP: 00:08:23.163 ================ 00:08:23.163 Enabled: Yes 00:08:23.163 FDP configuration index: 0 00:08:23.163 00:08:23.163 FDP configurations log page 00:08:23.163 =========================== 00:08:23.163 Number of FDP configurations: 1 00:08:23.163 Version: 0 00:08:23.163 Size: 112 00:08:23.163 FDP Configuration Descriptor: 0 00:08:23.164 Descriptor Size: 96 00:08:23.164 Reclaim Group Identifier format: 2 00:08:23.164 FDP Volatile Write Cache: Not Present 00:08:23.164 FDP Configuration: Valid 00:08:23.164 Vendor Specific Size: 0 00:08:23.164 Number of Reclaim Groups: 2 00:08:23.164 Number 
of Reclaim Unit Handles: 8 00:08:23.164 Max Placement Identifiers: 128 00:08:23.164 Number of Namespaces Supported: 256 00:08:23.164 Reclaim Unit Nominal Size: 6000000 bytes 00:08:23.164 Estimated Reclaim Unit Time Limit: Not Reported 00:08:23.164 RUH Desc #000: RUH Type: Initially Isolated 00:08:23.164 RUH Desc #001: RUH Type: Initially Isolated 00:08:23.164 RUH Desc #002: RUH Type: Initially Isolated 00:08:23.164 RUH Desc #003: RUH Type: Initially Isolated 00:08:23.164 RUH Desc #004: RUH Type: Initially Isolated 00:08:23.164 RUH Desc #005: RUH Type: Initially Isolated 00:08:23.164 RUH Desc #006: RUH Type: Initially Isolated 00:08:23.164 RUH Desc #007: RUH Type: Initially Isolated 00:08:23.164 00:08:23.164 FDP reclaim unit handle usage log page 00:08:23.164 ====================================== 00:08:23.164 Number of Reclaim Unit Handles: 8 00:08:23.164 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:23.164 RUH Usage Desc #001: RUH Attributes: Unused 00:08:23.164 RUH Usage Desc #002: RUH Attributes: Unused 00:08:23.164 RUH Usage Desc #003: RUH Attributes: Unused 00:08:23.164 RUH Usage Desc #004: RUH Attributes: Unused 00:08:23.164 RUH Usage Desc #005: RUH Attributes: Unused 00:08:23.164 RUH Usage Desc #006: RUH Attributes: Unused 00:08:23.164 RUH Usage Desc #007: RUH Attributes: Unused 00:08:23.164 00:08:23.164 FDP statistics log page 00:08:23.164 ======================= 00:08:23.164 Host bytes with metadata written: 556179456 00:08:23.164 Media bytes with metadata written: 556257280 00:08:23.164 Media bytes erased: 0 00:08:23.164 00:08:23.164 FDP events log page 00:08:23.164 =================== 00:08:23.164 Number of FDP events: 0 00:08:23.164 00:08:23.164 NVM Specific Namespace Data 00:08:23.164 =========================== 00:08:23.164 Logical Block Storage Tag Mask: 0 00:08:23.164 Protection Information Capabilities: 00:08:23.164 16b Guard Protection Information Storage Tag Support: No 00:08:23.164 16b Guard Protection Information Storage Tag
Mask: Any bit in LBSTM can be 0 00:08:23.164 Storage Tag Check Read Support: No 00:08:23.164 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.164 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.164 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.164 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.164 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.164 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.164 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.164 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:23.164 00:08:23.164 real 0m1.204s 00:08:23.164 user 0m0.377s 00:08:23.164 sys 0m0.592s 00:08:23.164 18:19:11 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.164 18:19:11 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:23.164 ************************************ 00:08:23.164 END TEST nvme_identify 00:08:23.164 ************************************ 00:08:23.164 18:19:11 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:23.164 18:19:11 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:23.164 18:19:11 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:23.164 18:19:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.164 ************************************ 00:08:23.164 START TEST nvme_perf 00:08:23.164 ************************************ 00:08:23.164 18:19:11 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:23.164 18:19:11 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 
-LL -i 0 -N 00:08:24.558 Initializing NVMe Controllers 00:08:24.559 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:24.559 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.559 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:24.559 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.559 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:24.559 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:24.559 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:24.559 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:24.559 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:24.559 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:24.559 Initialization complete. Launching workers. 00:08:24.559 ======================================================== 00:08:24.559 Latency(us) 00:08:24.559 Device Information : IOPS MiB/s Average min max 00:08:24.559 PCIE (0000:00:10.0) NSID 1 from core 0: 7512.76 88.04 17046.88 11385.07 42234.87 00:08:24.559 PCIE (0000:00:11.0) NSID 1 from core 0: 7512.76 88.04 17027.71 9936.83 41152.75 00:08:24.559 PCIE (0000:00:13.0) NSID 1 from core 0: 7512.76 88.04 17003.78 7506.85 41562.45 00:08:24.559 PCIE (0000:00:12.0) NSID 1 from core 0: 7512.76 88.04 16978.58 6426.43 40832.37 00:08:24.559 PCIE (0000:00:12.0) NSID 2 from core 0: 7512.76 88.04 16953.59 5440.38 40220.68 00:08:24.559 PCIE (0000:00:12.0) NSID 3 from core 0: 7576.43 88.79 16786.47 4764.25 32142.40 00:08:24.559 ======================================================== 00:08:24.559 Total : 45140.23 528.99 16965.91 4764.25 42234.87 00:08:24.559 00:08:24.559 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:24.559 ================================================================================= 00:08:24.559 1.00000% : 13208.025us 00:08:24.559 10.00000% : 14720.394us 00:08:24.559 25.00000% : 15426.166us 00:08:24.559 50.00000% : 16636.062us 00:08:24.559 75.00000% : 17946.782us 
00:08:24.559 90.00000% : 19257.502us 00:08:24.559 95.00000% : 20366.572us 00:08:24.559 98.00000% : 22584.714us 00:08:24.559 99.00000% : 33675.422us 00:08:24.559 99.50000% : 41338.092us 00:08:24.559 99.90000% : 42144.689us 00:08:24.559 99.99000% : 42346.338us 00:08:24.559 99.99900% : 42346.338us 00:08:24.559 99.99990% : 42346.338us 00:08:24.559 99.99999% : 42346.338us 00:08:24.559 00:08:24.559 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:24.559 ================================================================================= 00:08:24.559 1.00000% : 13510.498us 00:08:24.559 10.00000% : 14720.394us 00:08:24.559 25.00000% : 15426.166us 00:08:24.559 50.00000% : 16636.062us 00:08:24.559 75.00000% : 18047.606us 00:08:24.559 90.00000% : 19257.502us 00:08:24.559 95.00000% : 20366.572us 00:08:24.559 98.00000% : 22584.714us 00:08:24.559 99.00000% : 32868.825us 00:08:24.559 99.50000% : 40329.846us 00:08:24.559 99.90000% : 41136.443us 00:08:24.559 99.99000% : 41338.092us 00:08:24.559 99.99900% : 41338.092us 00:08:24.559 99.99990% : 41338.092us 00:08:24.559 99.99999% : 41338.092us 00:08:24.559 00:08:24.559 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:24.559 ================================================================================= 00:08:24.559 1.00000% : 13409.674us 00:08:24.559 10.00000% : 14821.218us 00:08:24.559 25.00000% : 15426.166us 00:08:24.559 50.00000% : 16535.237us 00:08:24.559 75.00000% : 17946.782us 00:08:24.559 90.00000% : 19358.326us 00:08:24.559 95.00000% : 20164.923us 00:08:24.559 98.00000% : 22584.714us 00:08:24.559 99.00000% : 33272.123us 00:08:24.559 99.50000% : 40733.145us 00:08:24.559 99.90000% : 41539.742us 00:08:24.559 99.99000% : 41741.391us 00:08:24.559 99.99900% : 41741.391us 00:08:24.559 99.99990% : 41741.391us 00:08:24.559 99.99999% : 41741.391us 00:08:24.559 00:08:24.559 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:24.559 
================================================================================= 00:08:24.559 1.00000% : 13510.498us 00:08:24.559 10.00000% : 14821.218us 00:08:24.559 25.00000% : 15426.166us 00:08:24.559 50.00000% : 16636.062us 00:08:24.559 75.00000% : 17946.782us 00:08:24.559 90.00000% : 19257.502us 00:08:24.559 95.00000% : 20164.923us 00:08:24.559 98.00000% : 21979.766us 00:08:24.559 99.00000% : 32465.526us 00:08:24.559 99.50000% : 39926.548us 00:08:24.559 99.90000% : 40733.145us 00:08:24.559 99.99000% : 40934.794us 00:08:24.559 99.99900% : 40934.794us 00:08:24.559 99.99990% : 40934.794us 00:08:24.559 99.99999% : 40934.794us 00:08:24.559 00:08:24.559 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:24.559 ================================================================================= 00:08:24.559 1.00000% : 13308.849us 00:08:24.559 10.00000% : 14720.394us 00:08:24.559 25.00000% : 15426.166us 00:08:24.559 50.00000% : 16636.062us 00:08:24.559 75.00000% : 17946.782us 00:08:24.559 90.00000% : 19257.502us 00:08:24.559 95.00000% : 20265.748us 00:08:24.560 98.00000% : 21677.292us 00:08:24.560 99.00000% : 31860.578us 00:08:24.560 99.50000% : 39523.249us 00:08:24.560 99.90000% : 40128.197us 00:08:24.560 99.99000% : 40329.846us 00:08:24.560 99.99900% : 40329.846us 00:08:24.560 99.99990% : 40329.846us 00:08:24.560 99.99999% : 40329.846us 00:08:24.560 00:08:24.560 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:24.560 ================================================================================= 00:08:24.560 1.00000% : 13208.025us 00:08:24.560 10.00000% : 14720.394us 00:08:24.560 25.00000% : 15325.342us 00:08:24.560 50.00000% : 16636.062us 00:08:24.560 75.00000% : 17946.782us 00:08:24.560 90.00000% : 19156.677us 00:08:24.560 95.00000% : 20265.748us 00:08:24.560 98.00000% : 22383.065us 00:08:24.560 99.00000% : 23794.609us 00:08:24.560 99.50000% : 31457.280us 00:08:24.560 99.90000% : 32062.228us 00:08:24.560 99.99000% : 
32263.877us 00:08:24.560 99.99900% : 32263.877us 00:08:24.560 99.99990% : 32263.877us 00:08:24.560 99.99999% : 32263.877us 00:08:24.560 00:08:24.560 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:24.560 ============================================================================== 00:08:24.560 Range in us Cumulative IO count 00:08:24.560 11342.769 - 11393.182: 0.0132% ( 1) 00:08:24.560 11393.182 - 11443.594: 0.0265% ( 1) 00:08:24.560 11443.594 - 11494.006: 0.0927% ( 5) 00:08:24.560 11494.006 - 11544.418: 0.1192% ( 2) 00:08:24.560 11544.418 - 11594.831: 0.1457% ( 2) 00:08:24.560 11594.831 - 11645.243: 0.1986% ( 4) 00:08:24.560 11645.243 - 11695.655: 0.2383% ( 3) 00:08:24.560 11695.655 - 11746.068: 0.2516% ( 1) 00:08:24.560 11746.068 - 11796.480: 0.3178% ( 5) 00:08:24.560 11796.480 - 11846.892: 0.3310% ( 1) 00:08:24.560 11846.892 - 11897.305: 0.3708% ( 3) 00:08:24.560 11897.305 - 11947.717: 0.4237% ( 4) 00:08:24.560 11947.717 - 11998.129: 0.4635% ( 3) 00:08:24.560 11998.129 - 12048.542: 0.5032% ( 3) 00:08:24.560 12048.542 - 12098.954: 0.5429% ( 3) 00:08:24.560 12098.954 - 12149.366: 0.5826% ( 3) 00:08:24.560 12149.366 - 12199.778: 0.6224% ( 3) 00:08:24.560 12199.778 - 12250.191: 0.6753% ( 4) 00:08:24.560 12250.191 - 12300.603: 0.7018% ( 2) 00:08:24.560 12300.603 - 12351.015: 0.7548% ( 4) 00:08:24.560 12351.015 - 12401.428: 0.7945% ( 3) 00:08:24.560 12401.428 - 12451.840: 0.8342% ( 3) 00:08:24.560 12451.840 - 12502.252: 0.8475% ( 1) 00:08:24.560 12855.138 - 12905.551: 0.8739% ( 2) 00:08:24.560 12905.551 - 13006.375: 0.9137% ( 3) 00:08:24.560 13006.375 - 13107.200: 0.9666% ( 4) 00:08:24.560 13107.200 - 13208.025: 1.0064% ( 3) 00:08:24.560 13208.025 - 13308.849: 1.1123% ( 8) 00:08:24.560 13308.849 - 13409.674: 1.2182% ( 8) 00:08:24.560 13409.674 - 13510.498: 1.3109% ( 7) 00:08:24.560 13510.498 - 13611.323: 1.4036% ( 7) 00:08:24.560 13611.323 - 13712.148: 1.4963% ( 7) 00:08:24.560 13712.148 - 13812.972: 1.7214% ( 17) 00:08:24.560 13812.972 - 
13913.797: 2.0922% ( 28) 00:08:24.560 13913.797 - 14014.622: 2.4497% ( 27) 00:08:24.560 14014.622 - 14115.446: 3.2707% ( 62) 00:08:24.560 14115.446 - 14216.271: 4.1049% ( 63) 00:08:24.560 14216.271 - 14317.095: 5.2436% ( 86) 00:08:24.560 14317.095 - 14417.920: 6.5413% ( 98) 00:08:24.560 14417.920 - 14518.745: 7.8919% ( 102) 00:08:24.560 14518.745 - 14619.569: 9.2029% ( 99) 00:08:24.560 14619.569 - 14720.394: 11.0037% ( 136) 00:08:24.560 14720.394 - 14821.218: 12.6721% ( 126) 00:08:24.560 14821.218 - 14922.043: 14.7378% ( 156) 00:08:24.560 14922.043 - 15022.868: 16.8300% ( 158) 00:08:24.560 15022.868 - 15123.692: 19.3856% ( 193) 00:08:24.560 15123.692 - 15224.517: 21.8750% ( 188) 00:08:24.560 15224.517 - 15325.342: 24.2188% ( 177) 00:08:24.560 15325.342 - 15426.166: 26.7346% ( 190) 00:08:24.560 15426.166 - 15526.991: 29.2903% ( 193) 00:08:24.560 15526.991 - 15627.815: 31.2500% ( 148) 00:08:24.560 15627.815 - 15728.640: 33.3289% ( 157) 00:08:24.560 15728.640 - 15829.465: 35.0768% ( 132) 00:08:24.560 15829.465 - 15930.289: 37.3808% ( 174) 00:08:24.560 15930.289 - 16031.114: 39.1287% ( 132) 00:08:24.560 16031.114 - 16131.938: 41.0222% ( 143) 00:08:24.560 16131.938 - 16232.763: 43.2071% ( 165) 00:08:24.560 16232.763 - 16333.588: 44.9285% ( 130) 00:08:24.560 16333.588 - 16434.412: 46.8882% ( 148) 00:08:24.560 16434.412 - 16535.237: 48.7421% ( 140) 00:08:24.560 16535.237 - 16636.062: 50.5826% ( 139) 00:08:24.560 16636.062 - 16736.886: 52.4232% ( 139) 00:08:24.560 16736.886 - 16837.711: 54.6478% ( 168) 00:08:24.560 16837.711 - 16938.535: 56.5016% ( 140) 00:08:24.560 16938.535 - 17039.360: 58.5540% ( 155) 00:08:24.560 17039.360 - 17140.185: 60.3416% ( 135) 00:08:24.560 17140.185 - 17241.009: 62.3808% ( 154) 00:08:24.560 17241.009 - 17341.834: 64.2346% ( 140) 00:08:24.560 17341.834 - 17442.658: 66.3400% ( 159) 00:08:24.560 17442.658 - 17543.483: 68.3660% ( 153) 00:08:24.560 17543.483 - 17644.308: 70.4714% ( 159) 00:08:24.560 17644.308 - 17745.132: 72.2987% ( 138) 
00:08:24.560 17745.132 - 17845.957: 74.1790% ( 142) 00:08:24.560 17845.957 - 17946.782: 76.0990% ( 145) 00:08:24.560 17946.782 - 18047.606: 77.7278% ( 123) 00:08:24.560 18047.606 - 18148.431: 79.1711% ( 109) 00:08:24.560 18148.431 - 18249.255: 80.4555% ( 97) 00:08:24.560 18249.255 - 18350.080: 81.8724% ( 107) 00:08:24.560 18350.080 - 18450.905: 83.0773% ( 91) 00:08:24.560 18450.905 - 18551.729: 84.3485% ( 96) 00:08:24.560 18551.729 - 18652.554: 85.2357% ( 67) 00:08:24.560 18652.554 - 18753.378: 86.3612% ( 85) 00:08:24.560 18753.378 - 18854.203: 87.3279% ( 73) 00:08:24.560 18854.203 - 18955.028: 88.1753% ( 64) 00:08:24.560 18955.028 - 19055.852: 88.8904% ( 54) 00:08:24.560 19055.852 - 19156.677: 89.4995% ( 46) 00:08:24.560 19156.677 - 19257.502: 90.1748% ( 51) 00:08:24.561 19257.502 - 19358.326: 90.8633% ( 52) 00:08:24.561 19358.326 - 19459.151: 91.2209% ( 27) 00:08:24.561 19459.151 - 19559.975: 91.8829% ( 50) 00:08:24.561 19559.975 - 19660.800: 92.3596% ( 36) 00:08:24.561 19660.800 - 19761.625: 92.7966% ( 33) 00:08:24.561 19761.625 - 19862.449: 93.2865% ( 37) 00:08:24.561 19862.449 - 19963.274: 93.6176% ( 25) 00:08:24.561 19963.274 - 20064.098: 93.9486% ( 25) 00:08:24.561 20064.098 - 20164.923: 94.3061% ( 27) 00:08:24.561 20164.923 - 20265.748: 94.5842% ( 21) 00:08:24.561 20265.748 - 20366.572: 95.0609% ( 36) 00:08:24.561 20366.572 - 20467.397: 95.4846% ( 32) 00:08:24.561 20467.397 - 20568.222: 95.8686% ( 29) 00:08:24.561 20568.222 - 20669.046: 96.1467% ( 21) 00:08:24.561 20669.046 - 20769.871: 96.3718% ( 17) 00:08:24.561 20769.871 - 20870.695: 96.5969% ( 17) 00:08:24.561 20870.695 - 20971.520: 96.7293% ( 10) 00:08:24.561 20971.520 - 21072.345: 96.8485% ( 9) 00:08:24.561 21072.345 - 21173.169: 96.9412% ( 7) 00:08:24.561 21173.169 - 21273.994: 97.0471% ( 8) 00:08:24.561 21273.994 - 21374.818: 97.0736% ( 2) 00:08:24.561 21374.818 - 21475.643: 97.1133% ( 3) 00:08:24.561 21475.643 - 21576.468: 97.2458% ( 10) 00:08:24.561 21576.468 - 21677.292: 97.3517% ( 8) 
00:08:24.561 21677.292 - 21778.117: 97.4576% ( 8) 00:08:24.561 21778.117 - 21878.942: 97.5371% ( 6) 00:08:24.561 21878.942 - 21979.766: 97.6430% ( 8) 00:08:24.561 21979.766 - 22080.591: 97.7357% ( 7) 00:08:24.561 22080.591 - 22181.415: 97.8019% ( 5) 00:08:24.561 22181.415 - 22282.240: 97.8284% ( 2) 00:08:24.561 22282.240 - 22383.065: 97.9078% ( 6) 00:08:24.561 22383.065 - 22483.889: 97.9476% ( 3) 00:08:24.561 22483.889 - 22584.714: 98.0005% ( 4) 00:08:24.561 22584.714 - 22685.538: 98.0535% ( 4) 00:08:24.561 22685.538 - 22786.363: 98.1065% ( 4) 00:08:24.561 22786.363 - 22887.188: 98.1594% ( 4) 00:08:24.561 22887.188 - 22988.012: 98.1859% ( 2) 00:08:24.561 22988.012 - 23088.837: 98.2389% ( 4) 00:08:24.561 23088.837 - 23189.662: 98.2786% ( 3) 00:08:24.561 23189.662 - 23290.486: 98.3051% ( 2) 00:08:24.561 32062.228 - 32263.877: 98.3316% ( 2) 00:08:24.561 32263.877 - 32465.526: 98.4375% ( 8) 00:08:24.561 32465.526 - 32667.175: 98.5434% ( 8) 00:08:24.561 32667.175 - 32868.825: 98.6494% ( 8) 00:08:24.561 32868.825 - 33070.474: 98.7288% ( 6) 00:08:24.561 33070.474 - 33272.123: 98.8745% ( 11) 00:08:24.561 33272.123 - 33473.772: 98.9539% ( 6) 00:08:24.561 33473.772 - 33675.422: 99.0466% ( 7) 00:08:24.561 33675.422 - 33877.071: 99.1525% ( 8) 00:08:24.561 40329.846 - 40531.495: 99.1790% ( 2) 00:08:24.561 40531.495 - 40733.145: 99.2585% ( 6) 00:08:24.561 40733.145 - 40934.794: 99.3512% ( 7) 00:08:24.561 40934.794 - 41136.443: 99.4571% ( 8) 00:08:24.561 41136.443 - 41338.092: 99.5498% ( 7) 00:08:24.561 41338.092 - 41539.742: 99.6557% ( 8) 00:08:24.561 41539.742 - 41741.391: 99.7617% ( 8) 00:08:24.561 41741.391 - 41943.040: 99.8543% ( 7) 00:08:24.561 41943.040 - 42144.689: 99.9603% ( 8) 00:08:24.561 42144.689 - 42346.338: 100.0000% ( 3) 00:08:24.561 00:08:24.561 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:24.561 ============================================================================== 00:08:24.561 Range in us Cumulative IO count 00:08:24.561 9931.225 
- 9981.637: 0.0397% ( 3) 00:08:24.561 9981.637 - 10032.049: 0.1059% ( 5) 00:08:24.561 10032.049 - 10082.462: 0.1457% ( 3) 00:08:24.561 10082.462 - 10132.874: 0.1589% ( 1) 00:08:24.561 10132.874 - 10183.286: 0.1986% ( 3) 00:08:24.561 10183.286 - 10233.698: 0.2648% ( 5) 00:08:24.561 10233.698 - 10284.111: 0.2913% ( 2) 00:08:24.561 10284.111 - 10334.523: 0.3443% ( 4) 00:08:24.561 10334.523 - 10384.935: 0.3708% ( 2) 00:08:24.561 10384.935 - 10435.348: 0.4105% ( 3) 00:08:24.561 10435.348 - 10485.760: 0.4502% ( 3) 00:08:24.561 10485.760 - 10536.172: 0.4899% ( 3) 00:08:24.561 10536.172 - 10586.585: 0.5429% ( 4) 00:08:24.561 10586.585 - 10636.997: 0.5826% ( 3) 00:08:24.561 10636.997 - 10687.409: 0.6091% ( 2) 00:08:24.561 10687.409 - 10737.822: 0.6488% ( 3) 00:08:24.561 10737.822 - 10788.234: 0.7018% ( 4) 00:08:24.561 10788.234 - 10838.646: 0.7415% ( 3) 00:08:24.561 10838.646 - 10889.058: 0.7812% ( 3) 00:08:24.561 10889.058 - 10939.471: 0.8210% ( 3) 00:08:24.561 10939.471 - 10989.883: 0.8475% ( 2) 00:08:24.561 13208.025 - 13308.849: 0.8739% ( 2) 00:08:24.561 13308.849 - 13409.674: 0.9269% ( 4) 00:08:24.561 13409.674 - 13510.498: 1.0064% ( 6) 00:08:24.561 13510.498 - 13611.323: 1.1917% ( 14) 00:08:24.561 13611.323 - 13712.148: 1.3109% ( 9) 00:08:24.561 13712.148 - 13812.972: 1.4566% ( 11) 00:08:24.561 13812.972 - 13913.797: 1.6552% ( 15) 00:08:24.561 13913.797 - 14014.622: 1.9597% ( 23) 00:08:24.561 14014.622 - 14115.446: 2.4497% ( 37) 00:08:24.561 14115.446 - 14216.271: 3.1912% ( 56) 00:08:24.561 14216.271 - 14317.095: 4.3300% ( 86) 00:08:24.561 14317.095 - 14417.920: 5.6409% ( 99) 00:08:24.561 14417.920 - 14518.745: 7.1769% ( 116) 00:08:24.561 14518.745 - 14619.569: 8.9645% ( 135) 00:08:24.561 14619.569 - 14720.394: 10.6329% ( 126) 00:08:24.561 14720.394 - 14821.218: 12.2749% ( 124) 00:08:24.561 14821.218 - 14922.043: 14.3935% ( 160) 00:08:24.561 14922.043 - 15022.868: 16.7638% ( 179) 00:08:24.561 15022.868 - 15123.692: 19.4386% ( 202) 00:08:24.561 15123.692 - 15224.517: 
21.9942% ( 193) 00:08:24.561 15224.517 - 15325.342: 24.5498% ( 193) 00:08:24.561 15325.342 - 15426.166: 27.1186% ( 194) 00:08:24.561 15426.166 - 15526.991: 29.5418% ( 183) 00:08:24.561 15526.991 - 15627.815: 31.8194% ( 172) 00:08:24.561 15627.815 - 15728.640: 34.1896% ( 179) 00:08:24.561 15728.640 - 15829.465: 36.2156% ( 153) 00:08:24.561 15829.465 - 15930.289: 38.4534% ( 169) 00:08:24.561 15930.289 - 16031.114: 40.6382% ( 165) 00:08:24.561 16031.114 - 16131.938: 42.6774% ( 154) 00:08:24.561 16131.938 - 16232.763: 44.6372% ( 148) 00:08:24.561 16232.763 - 16333.588: 46.3983% ( 133) 00:08:24.562 16333.588 - 16434.412: 48.0667% ( 126) 00:08:24.562 16434.412 - 16535.237: 49.7484% ( 127) 00:08:24.562 16535.237 - 16636.062: 51.5890% ( 139) 00:08:24.562 16636.062 - 16736.886: 53.4296% ( 139) 00:08:24.562 16736.886 - 16837.711: 55.3099% ( 142) 00:08:24.562 16837.711 - 16938.535: 57.2431% ( 146) 00:08:24.562 16938.535 - 17039.360: 59.1499% ( 144) 00:08:24.562 17039.360 - 17140.185: 60.8183% ( 126) 00:08:24.562 17140.185 - 17241.009: 62.5265% ( 129) 00:08:24.562 17241.009 - 17341.834: 64.3671% ( 139) 00:08:24.562 17341.834 - 17442.658: 65.9428% ( 119) 00:08:24.562 17442.658 - 17543.483: 67.7834% ( 139) 00:08:24.562 17543.483 - 17644.308: 69.4915% ( 129) 00:08:24.562 17644.308 - 17745.132: 71.0938% ( 121) 00:08:24.562 17745.132 - 17845.957: 72.6165% ( 115) 00:08:24.562 17845.957 - 17946.782: 74.0731% ( 110) 00:08:24.562 17946.782 - 18047.606: 75.4502% ( 104) 00:08:24.562 18047.606 - 18148.431: 76.8406% ( 105) 00:08:24.562 18148.431 - 18249.255: 78.2309% ( 105) 00:08:24.562 18249.255 - 18350.080: 79.7140% ( 112) 00:08:24.562 18350.080 - 18450.905: 81.1573% ( 109) 00:08:24.562 18450.905 - 18551.729: 82.7463% ( 120) 00:08:24.562 18551.729 - 18652.554: 84.0307% ( 97) 00:08:24.562 18652.554 - 18753.378: 85.3549% ( 100) 00:08:24.562 18753.378 - 18854.203: 86.4407% ( 82) 00:08:24.562 18854.203 - 18955.028: 87.6059% ( 88) 00:08:24.562 18955.028 - 19055.852: 88.7447% ( 86) 
00:08:24.562 19055.852 - 19156.677: 89.8702% ( 85) 00:08:24.562 19156.677 - 19257.502: 90.7574% ( 67) 00:08:24.562 19257.502 - 19358.326: 91.5651% ( 61) 00:08:24.562 19358.326 - 19459.151: 92.2934% ( 55) 00:08:24.562 19459.151 - 19559.975: 92.8363% ( 41) 00:08:24.562 19559.975 - 19660.800: 93.1939% ( 27) 00:08:24.562 19660.800 - 19761.625: 93.4587% ( 20) 00:08:24.562 19761.625 - 19862.449: 93.7103% ( 19) 00:08:24.562 19862.449 - 19963.274: 94.0016% ( 22) 00:08:24.562 19963.274 - 20064.098: 94.2929% ( 22) 00:08:24.562 20064.098 - 20164.923: 94.6239% ( 25) 00:08:24.562 20164.923 - 20265.748: 94.8358% ( 16) 00:08:24.562 20265.748 - 20366.572: 95.0742% ( 18) 00:08:24.562 20366.572 - 20467.397: 95.3655% ( 22) 00:08:24.562 20467.397 - 20568.222: 95.6435% ( 21) 00:08:24.562 20568.222 - 20669.046: 95.9084% ( 20) 00:08:24.562 20669.046 - 20769.871: 96.1202% ( 16) 00:08:24.562 20769.871 - 20870.695: 96.3453% ( 17) 00:08:24.562 20870.695 - 20971.520: 96.5704% ( 17) 00:08:24.562 20971.520 - 21072.345: 96.7558% ( 14) 00:08:24.562 21072.345 - 21173.169: 96.9280% ( 13) 00:08:24.562 21173.169 - 21273.994: 97.0869% ( 12) 00:08:24.562 21273.994 - 21374.818: 97.1796% ( 7) 00:08:24.562 21374.818 - 21475.643: 97.2325% ( 4) 00:08:24.562 21475.643 - 21576.468: 97.2855% ( 4) 00:08:24.562 21576.468 - 21677.292: 97.3517% ( 5) 00:08:24.562 21677.292 - 21778.117: 97.4841% ( 10) 00:08:24.562 21778.117 - 21878.942: 97.6033% ( 9) 00:08:24.562 21878.942 - 21979.766: 97.6562% ( 4) 00:08:24.562 21979.766 - 22080.591: 97.7092% ( 4) 00:08:24.562 22080.591 - 22181.415: 97.7754% ( 5) 00:08:24.562 22181.415 - 22282.240: 97.8416% ( 5) 00:08:24.562 22282.240 - 22383.065: 97.9078% ( 5) 00:08:24.562 22383.065 - 22483.889: 97.9740% ( 5) 00:08:24.562 22483.889 - 22584.714: 98.0403% ( 5) 00:08:24.562 22584.714 - 22685.538: 98.0932% ( 4) 00:08:24.562 22685.538 - 22786.363: 98.1329% ( 3) 00:08:24.562 22786.363 - 22887.188: 98.1727% ( 3) 00:08:24.562 22887.188 - 22988.012: 98.2124% ( 3) 00:08:24.562 22988.012 - 
23088.837: 98.2786% ( 5) 00:08:24.562 23088.837 - 23189.662: 98.3051% ( 2) 00:08:24.562 31457.280 - 31658.929: 98.3448% ( 3) 00:08:24.562 31658.929 - 31860.578: 98.4375% ( 7) 00:08:24.562 31860.578 - 32062.228: 98.5567% ( 9) 00:08:24.562 32062.228 - 32263.877: 98.6758% ( 9) 00:08:24.562 32263.877 - 32465.526: 98.7685% ( 7) 00:08:24.562 32465.526 - 32667.175: 98.8877% ( 9) 00:08:24.562 32667.175 - 32868.825: 99.0069% ( 9) 00:08:24.562 32868.825 - 33070.474: 99.1261% ( 9) 00:08:24.562 33070.474 - 33272.123: 99.1525% ( 2) 00:08:24.562 39523.249 - 39724.898: 99.2452% ( 7) 00:08:24.562 39724.898 - 39926.548: 99.3644% ( 9) 00:08:24.562 39926.548 - 40128.197: 99.4439% ( 6) 00:08:24.562 40128.197 - 40329.846: 99.5365% ( 7) 00:08:24.562 40329.846 - 40531.495: 99.6425% ( 8) 00:08:24.562 40531.495 - 40733.145: 99.7617% ( 9) 00:08:24.562 40733.145 - 40934.794: 99.8676% ( 8) 00:08:24.562 40934.794 - 41136.443: 99.9868% ( 9) 00:08:24.562 41136.443 - 41338.092: 100.0000% ( 1) 00:08:24.562 00:08:24.562 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:24.562 ============================================================================== 00:08:24.562 Range in us Cumulative IO count 00:08:24.562 7461.022 - 7511.434: 0.0132% ( 1) 00:08:24.562 7511.434 - 7561.846: 0.1059% ( 7) 00:08:24.562 7561.846 - 7612.258: 0.1457% ( 3) 00:08:24.562 7612.258 - 7662.671: 0.1854% ( 3) 00:08:24.562 7662.671 - 7713.083: 0.2119% ( 2) 00:08:24.562 7713.083 - 7763.495: 0.2516% ( 3) 00:08:24.562 7763.495 - 7813.908: 0.2781% ( 2) 00:08:24.562 7813.908 - 7864.320: 0.3178% ( 3) 00:08:24.562 7864.320 - 7914.732: 0.3575% ( 3) 00:08:24.562 7914.732 - 7965.145: 0.3972% ( 3) 00:08:24.562 7965.145 - 8015.557: 0.4502% ( 4) 00:08:24.562 8015.557 - 8065.969: 0.4899% ( 3) 00:08:24.562 8065.969 - 8116.382: 0.5429% ( 4) 00:08:24.562 8116.382 - 8166.794: 0.5826% ( 3) 00:08:24.562 8166.794 - 8217.206: 0.6224% ( 3) 00:08:24.562 8217.206 - 8267.618: 0.6753% ( 4) 00:08:24.562 8267.618 - 8318.031: 0.7150% ( 3) 
00:08:24.562 8318.031 - 8368.443: 0.7548% ( 3) 00:08:24.562 8368.443 - 8418.855: 0.7945% ( 3) 00:08:24.562 8418.855 - 8469.268: 0.8342% ( 3) 00:08:24.562 8469.268 - 8519.680: 0.8475% ( 1) 00:08:24.562 13107.200 - 13208.025: 0.8607% ( 1) 00:08:24.562 13208.025 - 13308.849: 0.9137% ( 4) 00:08:24.562 13308.849 - 13409.674: 1.0064% ( 7) 00:08:24.563 13409.674 - 13510.498: 1.0593% ( 4) 00:08:24.563 13510.498 - 13611.323: 1.1255% ( 5) 00:08:24.563 13611.323 - 13712.148: 1.2050% ( 6) 00:08:24.563 13712.148 - 13812.972: 1.3374% ( 10) 00:08:24.563 13812.972 - 13913.797: 1.7082% ( 28) 00:08:24.563 13913.797 - 14014.622: 2.1054% ( 30) 00:08:24.563 14014.622 - 14115.446: 2.5953% ( 37) 00:08:24.563 14115.446 - 14216.271: 3.3104% ( 54) 00:08:24.563 14216.271 - 14317.095: 4.3829% ( 81) 00:08:24.563 14317.095 - 14417.920: 5.4688% ( 82) 00:08:24.563 14417.920 - 14518.745: 6.7135% ( 94) 00:08:24.563 14518.745 - 14619.569: 8.1435% ( 108) 00:08:24.563 14619.569 - 14720.394: 9.6133% ( 111) 00:08:24.563 14720.394 - 14821.218: 11.3083% ( 128) 00:08:24.563 14821.218 - 14922.043: 13.5064% ( 166) 00:08:24.563 14922.043 - 15022.868: 16.1282% ( 198) 00:08:24.563 15022.868 - 15123.692: 18.8294% ( 204) 00:08:24.563 15123.692 - 15224.517: 21.4380% ( 197) 00:08:24.563 15224.517 - 15325.342: 24.0069% ( 194) 00:08:24.563 15325.342 - 15426.166: 26.5625% ( 193) 00:08:24.563 15426.166 - 15526.991: 28.9725% ( 182) 00:08:24.563 15526.991 - 15627.815: 31.2632% ( 173) 00:08:24.563 15627.815 - 15728.640: 33.3686% ( 159) 00:08:24.563 15728.640 - 15829.465: 35.5270% ( 163) 00:08:24.563 15829.465 - 15930.289: 37.8046% ( 172) 00:08:24.563 15930.289 - 16031.114: 40.4264% ( 198) 00:08:24.563 16031.114 - 16131.938: 42.5583% ( 161) 00:08:24.563 16131.938 - 16232.763: 44.7564% ( 166) 00:08:24.563 16232.763 - 16333.588: 46.6367% ( 142) 00:08:24.563 16333.588 - 16434.412: 48.3713% ( 131) 00:08:24.563 16434.412 - 16535.237: 50.1192% ( 132) 00:08:24.563 16535.237 - 16636.062: 51.8008% ( 127) 00:08:24.563 16636.062 - 
16736.886: 53.6414% ( 139) 00:08:24.563 16736.886 - 16837.711: 55.6806% ( 154) 00:08:24.563 16837.711 - 16938.535: 57.7066% ( 153) 00:08:24.563 16938.535 - 17039.360: 59.6663% ( 148) 00:08:24.563 17039.360 - 17140.185: 61.5334% ( 141) 00:08:24.563 17140.185 - 17241.009: 63.0429% ( 114) 00:08:24.563 17241.009 - 17341.834: 64.7775% ( 131) 00:08:24.563 17341.834 - 17442.658: 66.5651% ( 135) 00:08:24.563 17442.658 - 17543.483: 68.4322% ( 141) 00:08:24.563 17543.483 - 17644.308: 70.3655% ( 146) 00:08:24.563 17644.308 - 17745.132: 72.2722% ( 144) 00:08:24.563 17745.132 - 17845.957: 73.8215% ( 117) 00:08:24.563 17845.957 - 17946.782: 75.3178% ( 113) 00:08:24.563 17946.782 - 18047.606: 76.6817% ( 103) 00:08:24.563 18047.606 - 18148.431: 78.1250% ( 109) 00:08:24.563 18148.431 - 18249.255: 79.4889% ( 103) 00:08:24.563 18249.255 - 18350.080: 80.7336% ( 94) 00:08:24.563 18350.080 - 18450.905: 81.9253% ( 90) 00:08:24.563 18450.905 - 18551.729: 83.1038% ( 89) 00:08:24.563 18551.729 - 18652.554: 84.0837% ( 74) 00:08:24.563 18652.554 - 18753.378: 85.0636% ( 74) 00:08:24.563 18753.378 - 18854.203: 86.0832% ( 77) 00:08:24.563 18854.203 - 18955.028: 86.9836% ( 68) 00:08:24.563 18955.028 - 19055.852: 87.8178% ( 63) 00:08:24.563 19055.852 - 19156.677: 88.7182% ( 68) 00:08:24.563 19156.677 - 19257.502: 89.6584% ( 71) 00:08:24.563 19257.502 - 19358.326: 90.5720% ( 69) 00:08:24.563 19358.326 - 19459.151: 91.3400% ( 58) 00:08:24.563 19459.151 - 19559.975: 92.1743% ( 63) 00:08:24.563 19559.975 - 19660.800: 92.8231% ( 49) 00:08:24.563 19660.800 - 19761.625: 93.3925% ( 43) 00:08:24.563 19761.625 - 19862.449: 93.9089% ( 39) 00:08:24.563 19862.449 - 19963.274: 94.4518% ( 41) 00:08:24.563 19963.274 - 20064.098: 94.9947% ( 41) 00:08:24.563 20064.098 - 20164.923: 95.4052% ( 31) 00:08:24.563 20164.923 - 20265.748: 95.7627% ( 27) 00:08:24.563 20265.748 - 20366.572: 96.0673% ( 23) 00:08:24.563 20366.572 - 20467.397: 96.3189% ( 19) 00:08:24.563 20467.397 - 20568.222: 96.5704% ( 19) 00:08:24.563 
20568.222 - 20669.046: 96.7823% ( 16) 00:08:24.563 20669.046 - 20769.871: 96.9677% ( 14) 00:08:24.563 20769.871 - 20870.695: 97.1001% ( 10) 00:08:24.563 20870.695 - 20971.520: 97.2060% ( 8) 00:08:24.563 20971.520 - 21072.345: 97.3120% ( 8) 00:08:24.563 21072.345 - 21173.169: 97.4047% ( 7) 00:08:24.563 21173.169 - 21273.994: 97.4576% ( 4) 00:08:24.563 21778.117 - 21878.942: 97.4841% ( 2) 00:08:24.563 21878.942 - 21979.766: 97.5636% ( 6) 00:08:24.563 21979.766 - 22080.591: 97.6298% ( 5) 00:08:24.563 22080.591 - 22181.415: 97.7092% ( 6) 00:08:24.563 22181.415 - 22282.240: 97.7754% ( 5) 00:08:24.563 22282.240 - 22383.065: 97.8549% ( 6) 00:08:24.563 22383.065 - 22483.889: 97.9211% ( 5) 00:08:24.563 22483.889 - 22584.714: 98.0005% ( 6) 00:08:24.563 22584.714 - 22685.538: 98.0667% ( 5) 00:08:24.563 22685.538 - 22786.363: 98.1329% ( 5) 00:08:24.563 22786.363 - 22887.188: 98.2124% ( 6) 00:08:24.563 22887.188 - 22988.012: 98.2786% ( 5) 00:08:24.563 22988.012 - 23088.837: 98.3051% ( 2) 00:08:24.563 31658.929 - 31860.578: 98.3183% ( 1) 00:08:24.563 31860.578 - 32062.228: 98.4243% ( 8) 00:08:24.563 32062.228 - 32263.877: 98.5169% ( 7) 00:08:24.563 32263.877 - 32465.526: 98.6229% ( 8) 00:08:24.563 32465.526 - 32667.175: 98.7421% ( 9) 00:08:24.563 32667.175 - 32868.825: 98.8612% ( 9) 00:08:24.563 32868.825 - 33070.474: 98.9804% ( 9) 00:08:24.563 33070.474 - 33272.123: 99.0863% ( 8) 00:08:24.563 33272.123 - 33473.772: 99.1525% ( 5) 00:08:24.563 39926.548 - 40128.197: 99.2055% ( 4) 00:08:24.563 40128.197 - 40329.846: 99.2982% ( 7) 00:08:24.563 40329.846 - 40531.495: 99.4041% ( 8) 00:08:24.563 40531.495 - 40733.145: 99.5101% ( 8) 00:08:24.563 40733.145 - 40934.794: 99.6292% ( 9) 00:08:24.563 40934.794 - 41136.443: 99.7484% ( 9) 00:08:24.563 41136.443 - 41338.092: 99.8676% ( 9) 00:08:24.563 41338.092 - 41539.742: 99.9868% ( 9) 00:08:24.563 41539.742 - 41741.391: 100.0000% ( 1) 00:08:24.563 00:08:24.563 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:24.563 
============================================================================== 00:08:24.563 Range in us Cumulative IO count 00:08:24.563 6402.363 - 6427.569: 0.0132% ( 1) 00:08:24.563 6427.569 - 6452.775: 0.0662% ( 4) 00:08:24.563 6452.775 - 6503.188: 0.1059% ( 3) 00:08:24.563 6503.188 - 6553.600: 0.1324% ( 2) 00:08:24.563 6553.600 - 6604.012: 0.1721% ( 3) 00:08:24.563 6604.012 - 6654.425: 0.2251% ( 4) 00:08:24.564 6654.425 - 6704.837: 0.2648% ( 3) 00:08:24.564 6704.837 - 6755.249: 0.3046% ( 3) 00:08:24.564 6755.249 - 6805.662: 0.3443% ( 3) 00:08:24.564 6805.662 - 6856.074: 0.3972% ( 4) 00:08:24.564 6856.074 - 6906.486: 0.4370% ( 3) 00:08:24.564 6906.486 - 6956.898: 0.4899% ( 4) 00:08:24.564 6956.898 - 7007.311: 0.5164% ( 2) 00:08:24.564 7007.311 - 7057.723: 0.5561% ( 3) 00:08:24.564 7057.723 - 7108.135: 0.5959% ( 3) 00:08:24.564 7108.135 - 7158.548: 0.6224% ( 2) 00:08:24.564 7158.548 - 7208.960: 0.6488% ( 2) 00:08:24.564 7208.960 - 7259.372: 0.6886% ( 3) 00:08:24.564 7259.372 - 7309.785: 0.7283% ( 3) 00:08:24.564 7309.785 - 7360.197: 0.7680% ( 3) 00:08:24.564 7360.197 - 7410.609: 0.8210% ( 4) 00:08:24.564 7410.609 - 7461.022: 0.8475% ( 2) 00:08:24.564 13208.025 - 13308.849: 0.8739% ( 2) 00:08:24.564 13308.849 - 13409.674: 0.9401% ( 5) 00:08:24.564 13409.674 - 13510.498: 1.0461% ( 8) 00:08:24.564 13510.498 - 13611.323: 1.1653% ( 9) 00:08:24.564 13611.323 - 13712.148: 1.2977% ( 10) 00:08:24.564 13712.148 - 13812.972: 1.6155% ( 24) 00:08:24.564 13812.972 - 13913.797: 1.9730% ( 27) 00:08:24.564 13913.797 - 14014.622: 2.4762% ( 38) 00:08:24.564 14014.622 - 14115.446: 3.1912% ( 54) 00:08:24.564 14115.446 - 14216.271: 3.8798% ( 52) 00:08:24.564 14216.271 - 14317.095: 4.7537% ( 66) 00:08:24.564 14317.095 - 14417.920: 5.7865% ( 78) 00:08:24.564 14417.920 - 14518.745: 7.0577% ( 96) 00:08:24.564 14518.745 - 14619.569: 8.3422% ( 97) 00:08:24.564 14619.569 - 14720.394: 9.8385% ( 113) 00:08:24.564 14720.394 - 14821.218: 11.7585% ( 145) 00:08:24.564 14821.218 - 14922.043: 
13.8374% ( 157) 00:08:24.564 14922.043 - 15022.868: 16.0885% ( 170) 00:08:24.564 15022.868 - 15123.692: 18.2733% ( 165) 00:08:24.564 15123.692 - 15224.517: 20.6435% ( 179) 00:08:24.564 15224.517 - 15325.342: 23.1727% ( 191) 00:08:24.564 15325.342 - 15426.166: 25.3178% ( 162) 00:08:24.564 15426.166 - 15526.991: 27.6218% ( 174) 00:08:24.564 15526.991 - 15627.815: 29.9126% ( 173) 00:08:24.564 15627.815 - 15728.640: 32.1239% ( 167) 00:08:24.564 15728.640 - 15829.465: 34.6531% ( 191) 00:08:24.564 15829.465 - 15930.289: 37.1557% ( 189) 00:08:24.564 15930.289 - 16031.114: 39.2611% ( 159) 00:08:24.564 16031.114 - 16131.938: 41.1944% ( 146) 00:08:24.564 16131.938 - 16232.763: 43.2203% ( 153) 00:08:24.564 16232.763 - 16333.588: 45.2198% ( 151) 00:08:24.564 16333.588 - 16434.412: 47.2855% ( 156) 00:08:24.564 16434.412 - 16535.237: 49.3776% ( 158) 00:08:24.564 16535.237 - 16636.062: 51.2844% ( 144) 00:08:24.564 16636.062 - 16736.886: 53.3369% ( 155) 00:08:24.564 16736.886 - 16837.711: 55.4952% ( 163) 00:08:24.564 16837.711 - 16938.535: 57.2299% ( 131) 00:08:24.564 16938.535 - 17039.360: 59.2691% ( 154) 00:08:24.564 17039.360 - 17140.185: 61.0964% ( 138) 00:08:24.564 17140.185 - 17241.009: 63.1224% ( 153) 00:08:24.564 17241.009 - 17341.834: 64.9894% ( 141) 00:08:24.564 17341.834 - 17442.658: 66.8035% ( 137) 00:08:24.564 17442.658 - 17543.483: 68.5117% ( 129) 00:08:24.564 17543.483 - 17644.308: 70.5244% ( 152) 00:08:24.564 17644.308 - 17745.132: 72.4179% ( 143) 00:08:24.564 17745.132 - 17845.957: 74.1790% ( 133) 00:08:24.564 17845.957 - 17946.782: 75.8077% ( 123) 00:08:24.564 17946.782 - 18047.606: 77.2908% ( 112) 00:08:24.564 18047.606 - 18148.431: 78.7209% ( 108) 00:08:24.564 18148.431 - 18249.255: 80.0185% ( 98) 00:08:24.564 18249.255 - 18350.080: 81.1706% ( 87) 00:08:24.564 18350.080 - 18450.905: 82.4815% ( 99) 00:08:24.564 18450.905 - 18551.729: 83.8189% ( 101) 00:08:24.564 18551.729 - 18652.554: 85.0371% ( 92) 00:08:24.564 18652.554 - 18753.378: 86.1096% ( 81) 00:08:24.564 
18753.378 - 18854.203: 87.0630% ( 72) 00:08:24.564 18854.203 - 18955.028: 87.8708% ( 61) 00:08:24.564 18955.028 - 19055.852: 88.6255% ( 57) 00:08:24.564 19055.852 - 19156.677: 89.5392% ( 69) 00:08:24.564 19156.677 - 19257.502: 90.3337% ( 60) 00:08:24.564 19257.502 - 19358.326: 91.1282% ( 60) 00:08:24.564 19358.326 - 19459.151: 91.7903% ( 50) 00:08:24.564 19459.151 - 19559.975: 92.5318% ( 56) 00:08:24.564 19559.975 - 19660.800: 93.0217% ( 37) 00:08:24.564 19660.800 - 19761.625: 93.5381% ( 39) 00:08:24.564 19761.625 - 19862.449: 94.0413% ( 38) 00:08:24.564 19862.449 - 19963.274: 94.4518% ( 31) 00:08:24.564 19963.274 - 20064.098: 94.9153% ( 35) 00:08:24.564 20064.098 - 20164.923: 95.3390% ( 32) 00:08:24.564 20164.923 - 20265.748: 95.7627% ( 32) 00:08:24.564 20265.748 - 20366.572: 96.0938% ( 25) 00:08:24.564 20366.572 - 20467.397: 96.3056% ( 16) 00:08:24.564 20467.397 - 20568.222: 96.5042% ( 15) 00:08:24.564 20568.222 - 20669.046: 96.6764% ( 13) 00:08:24.564 20669.046 - 20769.871: 96.8088% ( 10) 00:08:24.564 20769.871 - 20870.695: 96.9544% ( 11) 00:08:24.564 20870.695 - 20971.520: 97.1001% ( 11) 00:08:24.564 20971.520 - 21072.345: 97.2060% ( 8) 00:08:24.564 21072.345 - 21173.169: 97.3120% ( 8) 00:08:24.564 21173.169 - 21273.994: 97.4709% ( 12) 00:08:24.564 21273.994 - 21374.818: 97.5636% ( 7) 00:08:24.564 21374.818 - 21475.643: 97.6695% ( 8) 00:08:24.564 21475.643 - 21576.468: 97.7622% ( 7) 00:08:24.564 21576.468 - 21677.292: 97.8284% ( 5) 00:08:24.564 21677.292 - 21778.117: 97.9078% ( 6) 00:08:24.564 21778.117 - 21878.942: 97.9873% ( 6) 00:08:24.564 21878.942 - 21979.766: 98.0535% ( 5) 00:08:24.564 21979.766 - 22080.591: 98.1065% ( 4) 00:08:24.564 22080.591 - 22181.415: 98.1859% ( 6) 00:08:24.564 22181.415 - 22282.240: 98.2521% ( 5) 00:08:24.564 22282.240 - 22383.065: 98.3051% ( 4) 00:08:24.564 31053.982 - 31255.631: 98.3448% ( 3) 00:08:24.564 31255.631 - 31457.280: 98.4640% ( 9) 00:08:24.564 31457.280 - 31658.929: 98.5699% ( 8) 00:08:24.564 31658.929 - 31860.578: 
98.6891% ( 9) 00:08:24.564 31860.578 - 32062.228: 98.7950% ( 8) 00:08:24.564 32062.228 - 32263.877: 98.9142% ( 9) 00:08:24.564 32263.877 - 32465.526: 99.0201% ( 8) 00:08:24.564 32465.526 - 32667.175: 99.1393% ( 9) 00:08:24.564 32667.175 - 32868.825: 99.1525% ( 1) 00:08:24.564 39119.951 - 39321.600: 99.1790% ( 2) 00:08:24.564 39321.600 - 39523.249: 99.2850% ( 8) 00:08:24.564 39523.249 - 39724.898: 99.3909% ( 8) 00:08:24.564 39724.898 - 39926.548: 99.5101% ( 9) 00:08:24.564 39926.548 - 40128.197: 99.6160% ( 8) 00:08:24.564 40128.197 - 40329.846: 99.7219% ( 8) 00:08:24.564 40329.846 - 40531.495: 99.8279% ( 8) 00:08:24.564 40531.495 - 40733.145: 99.9338% ( 8) 00:08:24.564 40733.145 - 40934.794: 100.0000% ( 5) 00:08:24.564 00:08:24.564 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:24.564 ============================================================================== 00:08:24.564 Range in us Cumulative IO count 00:08:24.564 5419.323 - 5444.529: 0.0132% ( 1) 00:08:24.564 5444.529 - 5469.735: 0.0265% ( 1) 00:08:24.564 5469.735 - 5494.942: 0.0530% ( 2) 00:08:24.564 5494.942 - 5520.148: 0.0662% ( 1) 00:08:24.564 5520.148 - 5545.354: 0.0927% ( 2) 00:08:24.564 5545.354 - 5570.560: 0.1192% ( 2) 00:08:24.564 5570.560 - 5595.766: 0.1457% ( 2) 00:08:24.564 5595.766 - 5620.972: 0.1721% ( 2) 00:08:24.564 5620.972 - 5646.178: 0.1854% ( 1) 00:08:24.564 5646.178 - 5671.385: 0.2119% ( 2) 00:08:24.564 5671.385 - 5696.591: 0.2383% ( 2) 00:08:24.564 5696.591 - 5721.797: 0.2516% ( 1) 00:08:24.564 5721.797 - 5747.003: 0.2781% ( 2) 00:08:24.564 5747.003 - 5772.209: 0.2913% ( 1) 00:08:24.564 5772.209 - 5797.415: 0.3178% ( 2) 00:08:24.564 5797.415 - 5822.622: 0.3443% ( 2) 00:08:24.564 5847.828 - 5873.034: 0.3575% ( 1) 00:08:24.564 5873.034 - 5898.240: 0.3708% ( 1) 00:08:24.564 5898.240 - 5923.446: 0.3972% ( 2) 00:08:24.564 5923.446 - 5948.652: 0.4105% ( 1) 00:08:24.564 5948.652 - 5973.858: 0.4370% ( 2) 00:08:24.564 5973.858 - 5999.065: 0.4502% ( 1) 00:08:24.564 5999.065 - 
6024.271: 0.4635% ( 1) 00:08:24.564 6024.271 - 6049.477: 0.4899% ( 2) 00:08:24.564 6049.477 - 6074.683: 0.5032% ( 1) 00:08:24.565 6074.683 - 6099.889: 0.5297% ( 2) 00:08:24.565 6099.889 - 6125.095: 0.5429% ( 1) 00:08:24.565 6125.095 - 6150.302: 0.5694% ( 2) 00:08:24.565 6150.302 - 6175.508: 0.5826% ( 1) 00:08:24.565 6175.508 - 6200.714: 0.6091% ( 2) 00:08:24.565 6200.714 - 6225.920: 0.6224% ( 1) 00:08:24.565 6225.920 - 6251.126: 0.6356% ( 1) 00:08:24.565 6251.126 - 6276.332: 0.6621% ( 2) 00:08:24.565 6276.332 - 6301.538: 0.6753% ( 1) 00:08:24.565 6301.538 - 6326.745: 0.7018% ( 2) 00:08:24.565 6326.745 - 6351.951: 0.7150% ( 1) 00:08:24.565 6351.951 - 6377.157: 0.7283% ( 1) 00:08:24.565 6377.157 - 6402.363: 0.7415% ( 1) 00:08:24.565 6402.363 - 6427.569: 0.7548% ( 1) 00:08:24.565 6427.569 - 6452.775: 0.7680% ( 1) 00:08:24.565 6452.775 - 6503.188: 0.8077% ( 3) 00:08:24.565 6503.188 - 6553.600: 0.8342% ( 2) 00:08:24.565 6553.600 - 6604.012: 0.8475% ( 1) 00:08:24.565 13006.375 - 13107.200: 0.8872% ( 3) 00:08:24.565 13107.200 - 13208.025: 0.9269% ( 3) 00:08:24.565 13208.025 - 13308.849: 1.0328% ( 8) 00:08:24.565 13308.849 - 13409.674: 1.1917% ( 12) 00:08:24.565 13409.674 - 13510.498: 1.3109% ( 9) 00:08:24.565 13510.498 - 13611.323: 1.4831% ( 13) 00:08:24.565 13611.323 - 13712.148: 1.6817% ( 15) 00:08:24.565 13712.148 - 13812.972: 2.0392% ( 27) 00:08:24.565 13812.972 - 13913.797: 2.3040% ( 20) 00:08:24.565 13913.797 - 14014.622: 2.6748% ( 28) 00:08:24.565 14014.622 - 14115.446: 3.4031% ( 55) 00:08:24.565 14115.446 - 14216.271: 4.2240% ( 62) 00:08:24.565 14216.271 - 14317.095: 5.2304% ( 76) 00:08:24.565 14317.095 - 14417.920: 6.3692% ( 86) 00:08:24.565 14417.920 - 14518.745: 7.6536% ( 97) 00:08:24.565 14518.745 - 14619.569: 9.2029% ( 117) 00:08:24.565 14619.569 - 14720.394: 10.8581% ( 125) 00:08:24.565 14720.394 - 14821.218: 12.4868% ( 123) 00:08:24.565 14821.218 - 14922.043: 14.5392% ( 155) 00:08:24.565 14922.043 - 15022.868: 16.6578% ( 160) 00:08:24.565 15022.868 - 
15123.692: 18.9883% ( 176) 00:08:24.565 15123.692 - 15224.517: 21.5572% ( 194) 00:08:24.565 15224.517 - 15325.342: 24.1525% ( 196) 00:08:24.565 15325.342 - 15426.166: 26.8273% ( 202) 00:08:24.565 15426.166 - 15526.991: 28.8665% ( 154) 00:08:24.565 15526.991 - 15627.815: 31.0514% ( 165) 00:08:24.565 15627.815 - 15728.640: 33.0906% ( 154) 00:08:24.565 15728.640 - 15829.465: 35.4740% ( 180) 00:08:24.565 15829.465 - 15930.289: 37.6059% ( 161) 00:08:24.565 15930.289 - 16031.114: 39.5657% ( 148) 00:08:24.565 16031.114 - 16131.938: 41.3930% ( 138) 00:08:24.565 16131.938 - 16232.763: 43.3792% ( 150) 00:08:24.565 16232.763 - 16333.588: 45.0742% ( 128) 00:08:24.565 16333.588 - 16434.412: 46.9809% ( 144) 00:08:24.565 16434.412 - 16535.237: 48.6096% ( 123) 00:08:24.565 16535.237 - 16636.062: 50.4105% ( 136) 00:08:24.565 16636.062 - 16736.886: 52.1981% ( 135) 00:08:24.565 16736.886 - 16837.711: 54.2638% ( 156) 00:08:24.565 16837.711 - 16938.535: 56.3559% ( 158) 00:08:24.565 16938.535 - 17039.360: 58.3157% ( 148) 00:08:24.565 17039.360 - 17140.185: 60.3151% ( 151) 00:08:24.565 17140.185 - 17241.009: 62.3808% ( 156) 00:08:24.565 17241.009 - 17341.834: 64.5524% ( 164) 00:08:24.565 17341.834 - 17442.658: 66.5122% ( 148) 00:08:24.565 17442.658 - 17543.483: 68.5381% ( 153) 00:08:24.565 17543.483 - 17644.308: 70.6568% ( 160) 00:08:24.565 17644.308 - 17745.132: 72.5238% ( 141) 00:08:24.565 17745.132 - 17845.957: 74.2850% ( 133) 00:08:24.565 17845.957 - 17946.782: 75.8475% ( 118) 00:08:24.565 17946.782 - 18047.606: 77.3570% ( 114) 00:08:24.565 18047.606 - 18148.431: 78.8136% ( 110) 00:08:24.565 18148.431 - 18249.255: 80.3363% ( 115) 00:08:24.565 18249.255 - 18350.080: 81.7135% ( 104) 00:08:24.565 18350.080 - 18450.905: 83.3024% ( 120) 00:08:24.565 18450.905 - 18551.729: 84.4809% ( 89) 00:08:24.565 18551.729 - 18652.554: 85.5535% ( 81) 00:08:24.565 18652.554 - 18753.378: 86.4936% ( 71) 00:08:24.565 18753.378 - 18854.203: 87.3543% ( 65) 00:08:24.565 18854.203 - 18955.028: 88.1488% ( 60) 
00:08:24.565 18955.028 - 19055.852: 88.8904% ( 56) 00:08:24.565 19055.852 - 19156.677: 89.5260% ( 48) 00:08:24.565 19156.677 - 19257.502: 90.2940% ( 58) 00:08:24.565 19257.502 - 19358.326: 90.9163% ( 47) 00:08:24.565 19358.326 - 19459.151: 91.4460% ( 40) 00:08:24.565 19459.151 - 19559.975: 91.9359% ( 37) 00:08:24.565 19559.975 - 19660.800: 92.4523% ( 39) 00:08:24.565 19660.800 - 19761.625: 92.9423% ( 37) 00:08:24.565 19761.625 - 19862.449: 93.3925% ( 34) 00:08:24.565 19862.449 - 19963.274: 93.8427% ( 34) 00:08:24.565 19963.274 - 20064.098: 94.2797% ( 33) 00:08:24.565 20064.098 - 20164.923: 94.7166% ( 33) 00:08:24.565 20164.923 - 20265.748: 95.0609% ( 26) 00:08:24.565 20265.748 - 20366.572: 95.3257% ( 20) 00:08:24.565 20366.572 - 20467.397: 95.5508% ( 17) 00:08:24.565 20467.397 - 20568.222: 95.8554% ( 23) 00:08:24.565 20568.222 - 20669.046: 96.1202% ( 20) 00:08:24.565 20669.046 - 20769.871: 96.3586% ( 18) 00:08:24.565 20769.871 - 20870.695: 96.5969% ( 18) 00:08:24.565 20870.695 - 20971.520: 96.8353% ( 18) 00:08:24.565 20971.520 - 21072.345: 97.1001% ( 20) 00:08:24.565 21072.345 - 21173.169: 97.2855% ( 14) 00:08:24.565 21173.169 - 21273.994: 97.4576% ( 13) 00:08:24.565 21273.994 - 21374.818: 97.6430% ( 14) 00:08:24.565 21374.818 - 21475.643: 97.8416% ( 15) 00:08:24.565 21475.643 - 21576.468: 97.9740% ( 10) 00:08:24.565 21576.468 - 21677.292: 98.0800% ( 8) 00:08:24.565 21677.292 - 21778.117: 98.1462% ( 5) 00:08:24.565 21778.117 - 21878.942: 98.1859% ( 3) 00:08:24.565 21878.942 - 21979.766: 98.2124% ( 2) 00:08:24.565 21979.766 - 22080.591: 98.2654% ( 4) 00:08:24.565 22080.591 - 22181.415: 98.2918% ( 2) 00:08:24.565 22181.415 - 22282.240: 98.3051% ( 1) 00:08:24.565 30247.385 - 30449.034: 98.3183% ( 1) 00:08:24.565 30449.034 - 30650.683: 98.4110% ( 7) 00:08:24.565 30650.683 - 30852.332: 98.5169% ( 8) 00:08:24.565 30852.332 - 31053.982: 98.6229% ( 8) 00:08:24.565 31053.982 - 31255.631: 98.7288% ( 8) 00:08:24.565 31255.631 - 31457.280: 98.8347% ( 8) 00:08:24.565 31457.280 
- 31658.929: 98.9407% ( 8) 00:08:24.565 31658.929 - 31860.578: 99.0599% ( 9) 00:08:24.565 31860.578 - 32062.228: 99.1525% ( 7) 00:08:24.565 38515.003 - 38716.652: 99.1923% ( 3) 00:08:24.565 38716.652 - 38918.302: 99.2850% ( 7) 00:08:24.565 38918.302 - 39119.951: 99.3776% ( 7) 00:08:24.565 39119.951 - 39321.600: 99.4836% ( 8) 00:08:24.565 39321.600 - 39523.249: 99.6028% ( 9) 00:08:24.565 39523.249 - 39724.898: 99.6954% ( 7) 00:08:24.565 39724.898 - 39926.548: 99.8146% ( 9) 00:08:24.565 39926.548 - 40128.197: 99.9338% ( 9) 00:08:24.565 40128.197 - 40329.846: 100.0000% ( 5) 00:08:24.565 00:08:24.565 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:24.565 ============================================================================== 00:08:24.566 Range in us Cumulative IO count 00:08:24.566 4763.963 - 4789.169: 0.0394% ( 3) 00:08:24.566 4789.169 - 4814.375: 0.0788% ( 3) 00:08:24.566 4814.375 - 4839.582: 0.1313% ( 4) 00:08:24.566 4864.788 - 4889.994: 0.1707% ( 3) 00:08:24.566 4889.994 - 4915.200: 0.1970% ( 2) 00:08:24.566 4915.200 - 4940.406: 0.2232% ( 2) 00:08:24.566 4940.406 - 4965.612: 0.2363% ( 1) 00:08:24.566 4965.612 - 4990.818: 0.2626% ( 2) 00:08:24.566 4990.818 - 5016.025: 0.2757% ( 1) 00:08:24.566 5016.025 - 5041.231: 0.3151% ( 3) 00:08:24.566 5041.231 - 5066.437: 0.3283% ( 1) 00:08:24.566 5066.437 - 5091.643: 0.3414% ( 1) 00:08:24.566 5091.643 - 5116.849: 0.3676% ( 2) 00:08:24.566 5116.849 - 5142.055: 0.3808% ( 1) 00:08:24.566 5142.055 - 5167.262: 0.4070% ( 2) 00:08:24.566 5167.262 - 5192.468: 0.4202% ( 1) 00:08:24.566 5217.674 - 5242.880: 0.4464% ( 2) 00:08:24.566 5242.880 - 5268.086: 0.4596% ( 1) 00:08:24.566 5268.086 - 5293.292: 0.4858% ( 2) 00:08:24.566 5293.292 - 5318.498: 0.4989% ( 1) 00:08:24.566 5318.498 - 5343.705: 0.5121% ( 1) 00:08:24.566 5343.705 - 5368.911: 0.5383% ( 2) 00:08:24.566 5368.911 - 5394.117: 0.5515% ( 1) 00:08:24.566 5394.117 - 5419.323: 0.5646% ( 1) 00:08:24.566 5419.323 - 5444.529: 0.5909% ( 2) 00:08:24.566 5444.529 
- 5469.735: 0.6040% ( 1) 00:08:24.566 5469.735 - 5494.942: 0.6303% ( 2) 00:08:24.566 5494.942 - 5520.148: 0.6434% ( 1) 00:08:24.566 5520.148 - 5545.354: 0.6696% ( 2) 00:08:24.566 5545.354 - 5570.560: 0.6828% ( 1) 00:08:24.566 5570.560 - 5595.766: 0.7090% ( 2) 00:08:24.566 5595.766 - 5620.972: 0.7222% ( 1) 00:08:24.566 5620.972 - 5646.178: 0.7484% ( 2) 00:08:24.566 5646.178 - 5671.385: 0.7616% ( 1) 00:08:24.566 5671.385 - 5696.591: 0.7878% ( 2) 00:08:24.566 5696.591 - 5721.797: 0.8009% ( 1) 00:08:24.566 5721.797 - 5747.003: 0.8272% ( 2) 00:08:24.566 5747.003 - 5772.209: 0.8403% ( 1) 00:08:24.566 12905.551 - 13006.375: 0.8797% ( 3) 00:08:24.566 13006.375 - 13107.200: 0.9848% ( 8) 00:08:24.566 13107.200 - 13208.025: 1.1423% ( 12) 00:08:24.566 13208.025 - 13308.849: 1.3130% ( 13) 00:08:24.566 13308.849 - 13409.674: 1.6019% ( 22) 00:08:24.566 13409.674 - 13510.498: 1.8514% ( 19) 00:08:24.566 13510.498 - 13611.323: 2.1271% ( 21) 00:08:24.566 13611.323 - 13712.148: 2.3766% ( 19) 00:08:24.566 13712.148 - 13812.972: 2.6261% ( 19) 00:08:24.566 13812.972 - 13913.797: 2.8493% ( 17) 00:08:24.566 13913.797 - 14014.622: 3.2563% ( 31) 00:08:24.566 14014.622 - 14115.446: 3.7815% ( 40) 00:08:24.566 14115.446 - 14216.271: 4.6481% ( 66) 00:08:24.566 14216.271 - 14317.095: 5.5935% ( 72) 00:08:24.566 14317.095 - 14417.920: 6.8671% ( 97) 00:08:24.566 14417.920 - 14518.745: 8.1670% ( 99) 00:08:24.566 14518.745 - 14619.569: 9.5326% ( 104) 00:08:24.566 14619.569 - 14720.394: 11.2658% ( 132) 00:08:24.566 14720.394 - 14821.218: 13.1303% ( 142) 00:08:24.566 14821.218 - 14922.043: 15.2442% ( 161) 00:08:24.566 14922.043 - 15022.868: 17.6733% ( 185) 00:08:24.566 15022.868 - 15123.692: 20.1681% ( 190) 00:08:24.566 15123.692 - 15224.517: 22.8466% ( 204) 00:08:24.566 15224.517 - 15325.342: 25.1838% ( 178) 00:08:24.566 15325.342 - 15426.166: 27.4554% ( 173) 00:08:24.566 15426.166 - 15526.991: 29.6350% ( 166) 00:08:24.566 15526.991 - 15627.815: 31.6833% ( 156) 00:08:24.566 15627.815 - 15728.640: 
33.7841% ( 160) 00:08:24.566 15728.640 - 15829.465: 35.5042% ( 131) 00:08:24.566 15829.465 - 15930.289: 37.3818% ( 143) 00:08:24.566 15930.289 - 16031.114: 39.3514% ( 150) 00:08:24.566 16031.114 - 16131.938: 41.4128% ( 157) 00:08:24.566 16131.938 - 16232.763: 43.1723% ( 134) 00:08:24.566 16232.763 - 16333.588: 44.9317% ( 134) 00:08:24.566 16333.588 - 16434.412: 46.6912% ( 134) 00:08:24.566 16434.412 - 16535.237: 48.6213% ( 147) 00:08:24.566 16535.237 - 16636.062: 50.3020% ( 128) 00:08:24.566 16636.062 - 16736.886: 52.2190% ( 146) 00:08:24.566 16736.886 - 16837.711: 53.9653% ( 133) 00:08:24.566 16837.711 - 16938.535: 56.0924% ( 162) 00:08:24.566 16938.535 - 17039.360: 58.3902% ( 175) 00:08:24.566 17039.360 - 17140.185: 60.5699% ( 166) 00:08:24.566 17140.185 - 17241.009: 62.6970% ( 162) 00:08:24.566 17241.009 - 17341.834: 64.8766% ( 166) 00:08:24.566 17341.834 - 17442.658: 66.9905% ( 161) 00:08:24.566 17442.658 - 17543.483: 69.1570% ( 165) 00:08:24.566 17543.483 - 17644.308: 71.0872% ( 147) 00:08:24.566 17644.308 - 17745.132: 72.9779% ( 144) 00:08:24.566 17745.132 - 17845.957: 74.8687% ( 144) 00:08:24.566 17845.957 - 17946.782: 76.5888% ( 131) 00:08:24.566 17946.782 - 18047.606: 78.1907% ( 122) 00:08:24.566 18047.606 - 18148.431: 79.8713% ( 128) 00:08:24.566 18148.431 - 18249.255: 81.3550% ( 113) 00:08:24.566 18249.255 - 18350.080: 82.7731% ( 108) 00:08:24.566 18350.080 - 18450.905: 83.9942% ( 93) 00:08:24.566 18450.905 - 18551.729: 85.0709% ( 82) 00:08:24.567 18551.729 - 18652.554: 85.9900% ( 70) 00:08:24.567 18652.554 - 18753.378: 86.9091% ( 70) 00:08:24.567 18753.378 - 18854.203: 87.8414% ( 71) 00:08:24.567 18854.203 - 18955.028: 88.6423% ( 61) 00:08:24.567 18955.028 - 19055.852: 89.5089% ( 66) 00:08:24.567 19055.852 - 19156.677: 90.1392% ( 48) 00:08:24.567 19156.677 - 19257.502: 90.7694% ( 48) 00:08:24.567 19257.502 - 19358.326: 91.4259% ( 50) 00:08:24.567 19358.326 - 19459.151: 91.9118% ( 37) 00:08:24.567 19459.151 - 19559.975: 92.3451% ( 33) 00:08:24.567 
19559.975 - 19660.800: 92.7652% ( 32) 00:08:24.567 19660.800 - 19761.625: 93.2117% ( 34) 00:08:24.567 19761.625 - 19862.449: 93.6056% ( 30) 00:08:24.567 19862.449 - 19963.274: 93.9863% ( 29) 00:08:24.567 19963.274 - 20064.098: 94.4065% ( 32) 00:08:24.567 20064.098 - 20164.923: 94.7479% ( 26) 00:08:24.567 20164.923 - 20265.748: 95.0368% ( 22) 00:08:24.567 20265.748 - 20366.572: 95.3388% ( 23) 00:08:24.567 20366.572 - 20467.397: 95.6801% ( 26) 00:08:24.567 20467.397 - 20568.222: 95.9559% ( 21) 00:08:24.567 20568.222 - 20669.046: 96.1397% ( 14) 00:08:24.567 20669.046 - 20769.871: 96.3629% ( 17) 00:08:24.567 20769.871 - 20870.695: 96.5336% ( 13) 00:08:24.567 20870.695 - 20971.520: 96.7174% ( 14) 00:08:24.567 20971.520 - 21072.345: 96.9144% ( 15) 00:08:24.567 21072.345 - 21173.169: 97.0851% ( 13) 00:08:24.567 21173.169 - 21273.994: 97.2295% ( 11) 00:08:24.567 21273.994 - 21374.818: 97.3083% ( 6) 00:08:24.567 21374.818 - 21475.643: 97.4265% ( 9) 00:08:24.567 21475.643 - 21576.468: 97.5315% ( 8) 00:08:24.567 21576.468 - 21677.292: 97.6234% ( 7) 00:08:24.567 21677.292 - 21778.117: 97.7153% ( 7) 00:08:24.567 21778.117 - 21878.942: 97.7810% ( 5) 00:08:24.567 21878.942 - 21979.766: 97.8204% ( 3) 00:08:24.567 21979.766 - 22080.591: 97.8729% ( 4) 00:08:24.567 22080.591 - 22181.415: 97.9386% ( 5) 00:08:24.567 22181.415 - 22282.240: 97.9911% ( 4) 00:08:24.567 22282.240 - 22383.065: 98.0567% ( 5) 00:08:24.567 22383.065 - 22483.889: 98.1092% ( 4) 00:08:24.567 22483.889 - 22584.714: 98.1880% ( 6) 00:08:24.567 22584.714 - 22685.538: 98.2931% ( 8) 00:08:24.567 22685.538 - 22786.363: 98.3981% ( 8) 00:08:24.567 22786.363 - 22887.188: 98.5032% ( 8) 00:08:24.567 22887.188 - 22988.012: 98.5557% ( 4) 00:08:24.567 22988.012 - 23088.837: 98.6082% ( 4) 00:08:24.567 23088.837 - 23189.662: 98.6607% ( 4) 00:08:24.567 23189.662 - 23290.486: 98.7264% ( 5) 00:08:24.567 23290.486 - 23391.311: 98.7789% ( 4) 00:08:24.567 23391.311 - 23492.135: 98.8445% ( 5) 00:08:24.567 23492.135 - 23592.960: 98.8971% 
( 4) 00:08:24.567 23592.960 - 23693.785: 98.9496% ( 4) 00:08:24.567 23693.785 - 23794.609: 99.0152% ( 5) 00:08:24.567 23794.609 - 23895.434: 99.0546% ( 3) 00:08:24.567 23895.434 - 23996.258: 99.1071% ( 4) 00:08:24.567 23996.258 - 24097.083: 99.1597% ( 4) 00:08:24.567 30449.034 - 30650.683: 99.1991% ( 3) 00:08:24.567 30650.683 - 30852.332: 99.2910% ( 7) 00:08:24.567 30852.332 - 31053.982: 99.3960% ( 8) 00:08:24.567 31053.982 - 31255.631: 99.4879% ( 7) 00:08:24.567 31255.631 - 31457.280: 99.5930% ( 8) 00:08:24.567 31457.280 - 31658.929: 99.7111% ( 9) 00:08:24.567 31658.929 - 31860.578: 99.8293% ( 9) 00:08:24.567 31860.578 - 32062.228: 99.9475% ( 9) 00:08:24.567 32062.228 - 32263.877: 100.0000% ( 4) 00:08:24.567 00:08:24.567 18:19:13 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:25.509 Initializing NVMe Controllers 00:08:25.509 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:25.509 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:25.509 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:25.509 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:25.509 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:25.509 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:25.509 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:25.509 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:25.509 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:25.509 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:25.509 Initialization complete. Launching workers. 
00:08:25.509 ======================================================== 00:08:25.509 Latency(us) 00:08:25.509 Device Information : IOPS MiB/s Average min max 00:08:25.509 PCIE (0000:00:10.0) NSID 1 from core 0: 10338.12 121.15 12388.85 8445.30 31378.29 00:08:25.509 PCIE (0000:00:11.0) NSID 1 from core 0: 10338.12 121.15 12380.37 8664.78 30540.73 00:08:25.509 PCIE (0000:00:13.0) NSID 1 from core 0: 10338.12 121.15 12369.65 7141.09 30770.42 00:08:25.509 PCIE (0000:00:12.0) NSID 1 from core 0: 10338.12 121.15 12358.37 6438.02 30112.23 00:08:25.509 PCIE (0000:00:12.0) NSID 2 from core 0: 10338.12 121.15 12347.59 5759.42 29581.50 00:08:25.509 PCIE (0000:00:12.0) NSID 3 from core 0: 10401.94 121.90 12261.31 5219.10 23779.17 00:08:25.509 ======================================================== 00:08:25.509 Total : 62092.55 727.65 12350.93 5219.10 31378.29 00:08:25.509 00:08:25.509 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:25.509 ================================================================================= 00:08:25.509 1.00000% : 9023.803us 00:08:25.509 10.00000% : 10132.874us 00:08:25.509 25.00000% : 10889.058us 00:08:25.509 50.00000% : 11947.717us 00:08:25.509 75.00000% : 13308.849us 00:08:25.509 90.00000% : 15022.868us 00:08:25.509 95.00000% : 16333.588us 00:08:25.509 98.00000% : 18148.431us 00:08:25.509 99.00000% : 23794.609us 00:08:25.509 99.50000% : 30247.385us 00:08:25.509 99.90000% : 31053.982us 00:08:25.509 99.99000% : 31457.280us 00:08:25.509 99.99900% : 31457.280us 00:08:25.509 99.99990% : 31457.280us 00:08:25.509 99.99999% : 31457.280us 00:08:25.509 00:08:25.509 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:25.509 ================================================================================= 00:08:25.509 1.00000% : 9023.803us 00:08:25.509 10.00000% : 10284.111us 00:08:25.509 25.00000% : 10889.058us 00:08:25.509 50.00000% : 11846.892us 00:08:25.509 75.00000% : 13308.849us 00:08:25.509 90.00000% : 
15123.692us 00:08:25.509 95.00000% : 16434.412us 00:08:25.509 98.00000% : 18148.431us 00:08:25.509 99.00000% : 23290.486us 00:08:25.509 99.50000% : 29844.086us 00:08:25.509 99.90000% : 30449.034us 00:08:25.509 99.99000% : 30650.683us 00:08:25.509 99.99900% : 30650.683us 00:08:25.509 99.99990% : 30650.683us 00:08:25.509 99.99999% : 30650.683us 00:08:25.509 00:08:25.509 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:25.509 ================================================================================= 00:08:25.509 1.00000% : 9074.215us 00:08:25.509 10.00000% : 10183.286us 00:08:25.509 25.00000% : 10939.471us 00:08:25.509 50.00000% : 11897.305us 00:08:25.509 75.00000% : 13308.849us 00:08:25.509 90.00000% : 15123.692us 00:08:25.509 95.00000% : 16434.412us 00:08:25.509 98.00000% : 17745.132us 00:08:25.509 99.00000% : 23895.434us 00:08:25.509 99.50000% : 30045.735us 00:08:25.509 99.90000% : 30650.683us 00:08:25.509 99.99000% : 30852.332us 00:08:25.509 99.99900% : 30852.332us 00:08:25.509 99.99990% : 30852.332us 00:08:25.509 99.99999% : 30852.332us 00:08:25.509 00:08:25.509 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:25.509 ================================================================================= 00:08:25.509 1.00000% : 9175.040us 00:08:25.509 10.00000% : 10183.286us 00:08:25.509 25.00000% : 10989.883us 00:08:25.509 50.00000% : 11897.305us 00:08:25.509 75.00000% : 13208.025us 00:08:25.509 90.00000% : 15022.868us 00:08:25.509 95.00000% : 16333.588us 00:08:25.509 98.00000% : 17644.308us 00:08:25.510 99.00000% : 23492.135us 00:08:25.510 99.50000% : 29239.138us 00:08:25.510 99.90000% : 30045.735us 00:08:25.510 99.99000% : 30247.385us 00:08:25.510 99.99900% : 30247.385us 00:08:25.510 99.99990% : 30247.385us 00:08:25.510 99.99999% : 30247.385us 00:08:25.510 00:08:25.510 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:25.510 
================================================================================= 00:08:25.510 1.00000% : 9124.628us 00:08:25.510 10.00000% : 10183.286us 00:08:25.510 25.00000% : 10889.058us 00:08:25.510 50.00000% : 11897.305us 00:08:25.510 75.00000% : 13208.025us 00:08:25.510 90.00000% : 15224.517us 00:08:25.510 95.00000% : 16232.763us 00:08:25.510 98.00000% : 17845.957us 00:08:25.510 99.00000% : 23088.837us 00:08:25.510 99.50000% : 28835.840us 00:08:25.510 99.90000% : 29440.788us 00:08:25.510 99.99000% : 29642.437us 00:08:25.510 99.99900% : 29642.437us 00:08:25.510 99.99990% : 29642.437us 00:08:25.510 99.99999% : 29642.437us 00:08:25.510 00:08:25.510 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:25.510 ================================================================================= 00:08:25.510 1.00000% : 9023.803us 00:08:25.510 10.00000% : 10233.698us 00:08:25.510 25.00000% : 10939.471us 00:08:25.510 50.00000% : 11897.305us 00:08:25.510 75.00000% : 13208.025us 00:08:25.510 90.00000% : 15022.868us 00:08:25.510 95.00000% : 16131.938us 00:08:25.510 98.00000% : 17140.185us 00:08:25.510 99.00000% : 18450.905us 00:08:25.510 99.50000% : 22887.188us 00:08:25.510 99.90000% : 23693.785us 00:08:25.510 99.99000% : 23794.609us 00:08:25.510 99.99900% : 23794.609us 00:08:25.510 99.99990% : 23794.609us 00:08:25.510 99.99999% : 23794.609us 00:08:25.510 00:08:25.510 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:25.510 ============================================================================== 00:08:25.510 Range in us Cumulative IO count 00:08:25.510 8418.855 - 8469.268: 0.0096% ( 1) 00:08:25.510 8469.268 - 8519.680: 0.0289% ( 2) 00:08:25.510 8519.680 - 8570.092: 0.0482% ( 2) 00:08:25.510 8570.092 - 8620.505: 0.0579% ( 1) 00:08:25.510 8620.505 - 8670.917: 0.0965% ( 4) 00:08:25.510 8670.917 - 8721.329: 0.2411% ( 15) 00:08:25.510 8721.329 - 8771.742: 0.3376% ( 10) 00:08:25.510 8771.742 - 8822.154: 0.4533% ( 12) 00:08:25.510 8822.154 
- 8872.566: 0.5883% ( 14) 00:08:25.510 8872.566 - 8922.978: 0.7620% ( 18) 00:08:25.510 8922.978 - 8973.391: 0.9066% ( 15) 00:08:25.510 8973.391 - 9023.803: 1.0031% ( 10) 00:08:25.510 9023.803 - 9074.215: 1.0706% ( 7) 00:08:25.510 9074.215 - 9124.628: 1.2442% ( 18) 00:08:25.510 9124.628 - 9175.040: 1.3696% ( 13) 00:08:25.510 9175.040 - 9225.452: 1.5143% ( 15) 00:08:25.510 9225.452 - 9275.865: 1.7168% ( 21) 00:08:25.510 9275.865 - 9326.277: 1.9097% ( 20) 00:08:25.510 9326.277 - 9376.689: 2.1701% ( 27) 00:08:25.510 9376.689 - 9427.102: 2.5945% ( 44) 00:08:25.510 9427.102 - 9477.514: 2.8549% ( 27) 00:08:25.510 9477.514 - 9527.926: 3.1732% ( 33) 00:08:25.510 9527.926 - 9578.338: 3.5301% ( 37) 00:08:25.510 9578.338 - 9628.751: 3.9062% ( 39) 00:08:25.510 9628.751 - 9679.163: 4.3499% ( 46) 00:08:25.510 9679.163 - 9729.575: 4.9093% ( 58) 00:08:25.510 9729.575 - 9779.988: 5.2758% ( 38) 00:08:25.510 9779.988 - 9830.400: 5.8256% ( 57) 00:08:25.510 9830.400 - 9880.812: 6.4333% ( 63) 00:08:25.510 9880.812 - 9931.225: 7.1277% ( 72) 00:08:25.510 9931.225 - 9981.637: 7.7257% ( 62) 00:08:25.510 9981.637 - 10032.049: 8.5262% ( 83) 00:08:25.510 10032.049 - 10082.462: 9.2689% ( 77) 00:08:25.510 10082.462 - 10132.874: 10.0694% ( 83) 00:08:25.510 10132.874 - 10183.286: 11.0243% ( 99) 00:08:25.510 10183.286 - 10233.698: 11.7670% ( 77) 00:08:25.510 10233.698 - 10284.111: 12.5675% ( 83) 00:08:25.510 10284.111 - 10334.523: 13.3488% ( 81) 00:08:25.510 10334.523 - 10384.935: 14.1879% ( 87) 00:08:25.510 10384.935 - 10435.348: 15.0656% ( 91) 00:08:25.510 10435.348 - 10485.760: 15.8083% ( 77) 00:08:25.510 10485.760 - 10536.172: 16.6860% ( 91) 00:08:25.510 10536.172 - 10586.585: 17.7566% ( 111) 00:08:25.510 10586.585 - 10636.997: 18.8947% ( 118) 00:08:25.510 10636.997 - 10687.409: 20.1100% ( 126) 00:08:25.510 10687.409 - 10737.822: 21.2095% ( 114) 00:08:25.510 10737.822 - 10788.234: 22.6370% ( 148) 00:08:25.510 10788.234 - 10838.646: 23.9969% ( 141) 00:08:25.510 10838.646 - 10889.058: 25.3376% ( 
139) 00:08:25.510 10889.058 - 10939.471: 26.5046% ( 121) 00:08:25.510 10939.471 - 10989.883: 27.5367% ( 107) 00:08:25.510 10989.883 - 11040.295: 28.8291% ( 134) 00:08:25.510 11040.295 - 11090.708: 29.8515% ( 106) 00:08:25.510 11090.708 - 11141.120: 31.3850% ( 159) 00:08:25.510 11141.120 - 11191.532: 32.6196% ( 128) 00:08:25.510 11191.532 - 11241.945: 33.6323% ( 105) 00:08:25.510 11241.945 - 11292.357: 35.0405% ( 146) 00:08:25.510 11292.357 - 11342.769: 36.4005% ( 141) 00:08:25.510 11342.769 - 11393.182: 37.6447% ( 129) 00:08:25.510 11393.182 - 11443.594: 38.7249% ( 112) 00:08:25.510 11443.594 - 11494.006: 40.0270% ( 135) 00:08:25.510 11494.006 - 11544.418: 41.0590% ( 107) 00:08:25.510 11544.418 - 11594.831: 42.6312% ( 163) 00:08:25.510 11594.831 - 11645.243: 43.6632% ( 107) 00:08:25.510 11645.243 - 11695.655: 45.2064% ( 160) 00:08:25.510 11695.655 - 11746.068: 46.5760% ( 142) 00:08:25.510 11746.068 - 11796.480: 47.7045% ( 117) 00:08:25.510 11796.480 - 11846.892: 48.9294% ( 127) 00:08:25.510 11846.892 - 11897.305: 49.6817% ( 78) 00:08:25.510 11897.305 - 11947.717: 50.7716% ( 113) 00:08:25.510 11947.717 - 11998.129: 51.7650% ( 103) 00:08:25.510 11998.129 - 12048.542: 52.9610% ( 124) 00:08:25.510 12048.542 - 12098.954: 54.2535% ( 134) 00:08:25.510 12098.954 - 12149.366: 55.7099% ( 151) 00:08:25.510 12149.366 - 12199.778: 56.9059% ( 124) 00:08:25.510 12199.778 - 12250.191: 57.9186% ( 105) 00:08:25.510 12250.191 - 12300.603: 59.0953% ( 122) 00:08:25.510 12300.603 - 12351.015: 60.0984% ( 104) 00:08:25.510 12351.015 - 12401.428: 61.3137% ( 126) 00:08:25.510 12401.428 - 12451.840: 62.3264% ( 105) 00:08:25.510 12451.840 - 12502.252: 63.4742% ( 119) 00:08:25.510 12502.252 - 12552.665: 64.4290% ( 99) 00:08:25.510 12552.665 - 12603.077: 65.5478% ( 116) 00:08:25.510 12603.077 - 12653.489: 66.3387% ( 82) 00:08:25.510 12653.489 - 12703.902: 67.0235% ( 71) 00:08:25.510 12703.902 - 12754.314: 67.7951% ( 80) 00:08:25.510 12754.314 - 12804.726: 68.4799% ( 71) 00:08:25.510 12804.726 - 
12855.138: 69.1262% ( 67) 00:08:25.510 12855.138 - 12905.551: 69.7242% ( 62) 00:08:25.510 12905.551 - 13006.375: 71.2770% ( 161) 00:08:25.510 13006.375 - 13107.200: 72.5887% ( 136) 00:08:25.510 13107.200 - 13208.025: 73.9583% ( 142) 00:08:25.510 13208.025 - 13308.849: 75.2122% ( 130) 00:08:25.510 13308.849 - 13409.674: 76.6204% ( 146) 00:08:25.510 13409.674 - 13510.498: 77.5174% ( 93) 00:08:25.510 13510.498 - 13611.323: 78.4529% ( 97) 00:08:25.510 13611.323 - 13712.148: 79.5332% ( 112) 00:08:25.510 13712.148 - 13812.972: 80.6617% ( 117) 00:08:25.510 13812.972 - 13913.797: 81.8866% ( 127) 00:08:25.510 13913.797 - 14014.622: 82.8029% ( 95) 00:08:25.510 14014.622 - 14115.446: 83.6902% ( 92) 00:08:25.510 14115.446 - 14216.271: 84.5679% ( 91) 00:08:25.510 14216.271 - 14317.095: 85.3299% ( 79) 00:08:25.510 14317.095 - 14417.920: 86.1400% ( 84) 00:08:25.510 14417.920 - 14518.745: 87.0467% ( 94) 00:08:25.510 14518.745 - 14619.569: 87.8279% ( 81) 00:08:25.510 14619.569 - 14720.394: 88.5320% ( 73) 00:08:25.510 14720.394 - 14821.218: 89.1782% ( 67) 00:08:25.510 14821.218 - 14922.043: 89.8534% ( 70) 00:08:25.510 14922.043 - 15022.868: 90.4996% ( 67) 00:08:25.510 15022.868 - 15123.692: 91.2037% ( 73) 00:08:25.510 15123.692 - 15224.517: 91.6667% ( 48) 00:08:25.510 15224.517 - 15325.342: 92.1200% ( 47) 00:08:25.510 15325.342 - 15426.166: 92.3708% ( 26) 00:08:25.511 15426.166 - 15526.991: 92.7083% ( 35) 00:08:25.511 15526.991 - 15627.815: 92.8723% ( 17) 00:08:25.511 15627.815 - 15728.640: 93.1231% ( 26) 00:08:25.511 15728.640 - 15829.465: 93.4028% ( 29) 00:08:25.511 15829.465 - 15930.289: 93.6343% ( 24) 00:08:25.511 15930.289 - 16031.114: 94.0104% ( 39) 00:08:25.511 16031.114 - 16131.938: 94.4637% ( 47) 00:08:25.511 16131.938 - 16232.763: 94.7917% ( 34) 00:08:25.511 16232.763 - 16333.588: 95.1678% ( 39) 00:08:25.511 16333.588 - 16434.412: 95.3704% ( 21) 00:08:25.511 16434.412 - 16535.237: 95.6887% ( 33) 00:08:25.511 16535.237 - 16636.062: 95.9780% ( 30) 00:08:25.511 16636.062 - 
16736.886: 96.1998% ( 23) 00:08:25.511 16736.886 - 16837.711: 96.4313% ( 24) 00:08:25.511 16837.711 - 16938.535: 96.7496% ( 33) 00:08:25.511 16938.535 - 17039.360: 96.9136% ( 17) 00:08:25.511 17039.360 - 17140.185: 97.0679% ( 16) 00:08:25.511 17140.185 - 17241.009: 97.2704% ( 21) 00:08:25.511 17241.009 - 17341.834: 97.5502% ( 29) 00:08:25.511 17341.834 - 17442.658: 97.7238% ( 18) 00:08:25.511 17442.658 - 17543.483: 97.7816% ( 6) 00:08:25.511 17543.483 - 17644.308: 97.8009% ( 2) 00:08:25.511 17644.308 - 17745.132: 97.8492% ( 5) 00:08:25.511 17745.132 - 17845.957: 97.8877% ( 4) 00:08:25.511 17845.957 - 17946.782: 97.9070% ( 2) 00:08:25.511 17946.782 - 18047.606: 97.9360% ( 3) 00:08:25.511 18047.606 - 18148.431: 98.0131% ( 8) 00:08:25.511 18148.431 - 18249.255: 98.1289% ( 12) 00:08:25.511 18249.255 - 18350.080: 98.2253% ( 10) 00:08:25.511 18350.080 - 18450.905: 98.3121% ( 9) 00:08:25.511 18450.905 - 18551.729: 98.3700% ( 6) 00:08:25.511 18551.729 - 18652.554: 98.4182% ( 5) 00:08:25.511 18652.554 - 18753.378: 98.4954% ( 8) 00:08:25.511 18753.378 - 18854.203: 98.5436% ( 5) 00:08:25.511 18854.203 - 18955.028: 98.5822% ( 4) 00:08:25.511 18955.028 - 19055.852: 98.6304% ( 5) 00:08:25.511 19055.852 - 19156.677: 98.6690% ( 4) 00:08:25.511 19156.677 - 19257.502: 98.7172% ( 5) 00:08:25.511 19257.502 - 19358.326: 98.7654% ( 5) 00:08:25.511 23088.837 - 23189.662: 98.7751% ( 1) 00:08:25.774 23189.662 - 23290.486: 98.8040% ( 3) 00:08:25.774 23290.486 - 23391.311: 98.8908% ( 9) 00:08:25.774 23391.311 - 23492.135: 98.9487% ( 6) 00:08:25.774 23592.960 - 23693.785: 98.9776% ( 3) 00:08:25.774 23693.785 - 23794.609: 99.0355% ( 6) 00:08:25.774 23794.609 - 23895.434: 99.0934% ( 6) 00:08:25.774 23895.434 - 23996.258: 99.1416% ( 5) 00:08:25.774 23996.258 - 24097.083: 99.1802% ( 4) 00:08:25.774 24097.083 - 24197.908: 99.2380% ( 6) 00:08:25.774 24197.908 - 24298.732: 99.2863% ( 5) 00:08:25.774 24298.732 - 24399.557: 99.3441% ( 6) 00:08:25.774 24399.557 - 24500.382: 99.3731% ( 3) 00:08:25.774 
24500.382 - 24601.206: 99.3827% ( 1) 00:08:25.774 29239.138 - 29440.788: 99.3924% ( 1) 00:08:25.774 29642.437 - 29844.086: 99.4020% ( 1) 00:08:25.774 29844.086 - 30045.735: 99.4792% ( 8) 00:08:25.774 30045.735 - 30247.385: 99.5756% ( 10) 00:08:25.774 30247.385 - 30449.034: 99.6914% ( 12) 00:08:25.774 30449.034 - 30650.683: 99.7878% ( 10) 00:08:25.774 30650.683 - 30852.332: 99.8939% ( 11) 00:08:25.774 30852.332 - 31053.982: 99.9132% ( 2) 00:08:25.774 31053.982 - 31255.631: 99.9711% ( 6) 00:08:25.774 31255.631 - 31457.280: 100.0000% ( 3) 00:08:25.774 00:08:25.774 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:25.774 ============================================================================== 00:08:25.774 Range in us Cumulative IO count 00:08:25.774 8620.505 - 8670.917: 0.0096% ( 1) 00:08:25.774 8670.917 - 8721.329: 0.0579% ( 5) 00:08:25.774 8721.329 - 8771.742: 0.1640% ( 11) 00:08:25.774 8771.742 - 8822.154: 0.3376% ( 18) 00:08:25.774 8822.154 - 8872.566: 0.5498% ( 22) 00:08:25.774 8872.566 - 8922.978: 0.7909% ( 25) 00:08:25.774 8922.978 - 8973.391: 0.9356% ( 15) 00:08:25.774 8973.391 - 9023.803: 1.1188% ( 19) 00:08:25.774 9023.803 - 9074.215: 1.3021% ( 19) 00:08:25.774 9074.215 - 9124.628: 1.4371% ( 14) 00:08:25.774 9124.628 - 9175.040: 1.6011% ( 17) 00:08:25.774 9175.040 - 9225.452: 1.7650% ( 17) 00:08:25.774 9225.452 - 9275.865: 1.9579% ( 20) 00:08:25.774 9275.865 - 9326.277: 2.1219% ( 17) 00:08:25.774 9326.277 - 9376.689: 2.3245% ( 21) 00:08:25.774 9376.689 - 9427.102: 2.5945% ( 28) 00:08:25.774 9427.102 - 9477.514: 2.7778% ( 19) 00:08:25.774 9477.514 - 9527.926: 2.9803% ( 21) 00:08:25.774 9527.926 - 9578.338: 3.1346% ( 16) 00:08:25.774 9578.338 - 9628.751: 3.3179% ( 19) 00:08:25.774 9628.751 - 9679.163: 3.5590% ( 25) 00:08:25.774 9679.163 - 9729.575: 3.9062% ( 36) 00:08:25.774 9729.575 - 9779.988: 4.3210% ( 43) 00:08:25.774 9779.988 - 9830.400: 4.8515% ( 55) 00:08:25.774 9830.400 - 9880.812: 5.3241% ( 49) 00:08:25.774 9880.812 - 9931.225: 
5.7967% ( 49) 00:08:25.774 9931.225 - 9981.637: 6.3561% ( 58) 00:08:25.774 9981.637 - 10032.049: 6.9252% ( 59) 00:08:25.774 10032.049 - 10082.462: 7.6582% ( 76) 00:08:25.774 10082.462 - 10132.874: 8.3237% ( 69) 00:08:25.774 10132.874 - 10183.286: 9.0278% ( 73) 00:08:25.774 10183.286 - 10233.698: 9.8187% ( 82) 00:08:25.774 10233.698 - 10284.111: 10.8025% ( 102) 00:08:25.774 10284.111 - 10334.523: 11.9406% ( 118) 00:08:25.774 10334.523 - 10384.935: 13.0112% ( 111) 00:08:25.774 10384.935 - 10435.348: 14.0818% ( 111) 00:08:25.774 10435.348 - 10485.760: 15.1813% ( 114) 00:08:25.774 10485.760 - 10536.172: 16.4641% ( 133) 00:08:25.774 10536.172 - 10586.585: 17.8434% ( 143) 00:08:25.774 10586.585 - 10636.997: 19.2708% ( 148) 00:08:25.774 10636.997 - 10687.409: 20.6211% ( 140) 00:08:25.774 10687.409 - 10737.822: 21.7785% ( 120) 00:08:25.774 10737.822 - 10788.234: 22.9745% ( 124) 00:08:25.774 10788.234 - 10838.646: 24.1609% ( 123) 00:08:25.774 10838.646 - 10889.058: 25.6173% ( 151) 00:08:25.774 10889.058 - 10939.471: 27.0737% ( 151) 00:08:25.774 10939.471 - 10989.883: 28.4529% ( 143) 00:08:25.774 10989.883 - 11040.295: 29.6200% ( 121) 00:08:25.774 11040.295 - 11090.708: 30.7581% ( 118) 00:08:25.774 11090.708 - 11141.120: 31.9830% ( 127) 00:08:25.774 11141.120 - 11191.532: 32.9958% ( 105) 00:08:25.774 11191.532 - 11241.945: 34.2110% ( 126) 00:08:25.774 11241.945 - 11292.357: 35.5131% ( 135) 00:08:25.774 11292.357 - 11342.769: 36.8634% ( 140) 00:08:25.774 11342.769 - 11393.182: 38.3198% ( 151) 00:08:25.774 11393.182 - 11443.594: 39.7473% ( 148) 00:08:25.774 11443.594 - 11494.006: 41.2423% ( 155) 00:08:25.774 11494.006 - 11544.418: 42.7758% ( 159) 00:08:25.774 11544.418 - 11594.831: 44.3673% ( 165) 00:08:25.774 11594.831 - 11645.243: 45.6308% ( 131) 00:08:25.774 11645.243 - 11695.655: 46.7882% ( 120) 00:08:25.774 11695.655 - 11746.068: 47.9552% ( 121) 00:08:25.774 11746.068 - 11796.480: 49.1416% ( 123) 00:08:25.774 11796.480 - 11846.892: 50.2797% ( 118) 00:08:25.774 11846.892 - 
11897.305: 51.0802% ( 83) 00:08:25.774 11897.305 - 11947.717: 51.9001% ( 85) 00:08:25.774 11947.717 - 11998.129: 52.7392% ( 87) 00:08:25.774 11998.129 - 12048.542: 53.6073% ( 90) 00:08:25.774 12048.542 - 12098.954: 54.6103% ( 104) 00:08:25.774 12098.954 - 12149.366: 55.5459% ( 97) 00:08:25.774 12149.366 - 12199.778: 56.3465% ( 83) 00:08:25.774 12199.778 - 12250.191: 57.1663% ( 85) 00:08:25.774 12250.191 - 12300.603: 58.0729% ( 94) 00:08:25.774 12300.603 - 12351.015: 59.1532% ( 112) 00:08:25.774 12351.015 - 12401.428: 60.3106% ( 120) 00:08:25.774 12401.428 - 12451.840: 61.5837% ( 132) 00:08:25.774 12451.840 - 12502.252: 62.8376% ( 130) 00:08:25.774 12502.252 - 12552.665: 63.9757% ( 118) 00:08:25.774 12552.665 - 12603.077: 65.2296% ( 130) 00:08:25.774 12603.077 - 12653.489: 66.3002% ( 111) 00:08:25.774 12653.489 - 12703.902: 67.4865% ( 123) 00:08:25.774 12703.902 - 12754.314: 68.2870% ( 83) 00:08:25.774 12754.314 - 12804.726: 69.0683% ( 81) 00:08:25.774 12804.726 - 12855.138: 69.7531% ( 71) 00:08:25.774 12855.138 - 12905.551: 70.3897% ( 66) 00:08:25.774 12905.551 - 13006.375: 71.9618% ( 163) 00:08:25.774 13006.375 - 13107.200: 73.2446% ( 133) 00:08:25.774 13107.200 - 13208.025: 74.3827% ( 118) 00:08:25.774 13208.025 - 13308.849: 75.3086% ( 96) 00:08:25.774 13308.849 - 13409.674: 76.3985% ( 113) 00:08:25.774 13409.674 - 13510.498: 77.6138% ( 126) 00:08:25.774 13510.498 - 13611.323: 78.9931% ( 143) 00:08:25.774 13611.323 - 13712.148: 80.3144% ( 137) 00:08:25.774 13712.148 - 13812.972: 81.7998% ( 154) 00:08:25.774 13812.972 - 13913.797: 83.2851% ( 154) 00:08:25.774 13913.797 - 14014.622: 84.5197% ( 128) 00:08:25.774 14014.622 - 14115.446: 85.2431% ( 75) 00:08:25.774 14115.446 - 14216.271: 85.7928% ( 57) 00:08:25.774 14216.271 - 14317.095: 86.1690% ( 39) 00:08:25.774 14317.095 - 14417.920: 86.6127% ( 46) 00:08:25.774 14417.920 - 14518.745: 87.3167% ( 73) 00:08:25.774 14518.745 - 14619.569: 87.8954% ( 60) 00:08:25.774 14619.569 - 14720.394: 88.6188% ( 75) 00:08:25.774 
14720.394 - 14821.218: 89.0721% ( 47) 00:08:25.774 14821.218 - 14922.043: 89.4676% ( 41) 00:08:25.774 14922.043 - 15022.868: 89.9788% ( 53) 00:08:25.774 15022.868 - 15123.692: 90.5961% ( 64) 00:08:25.774 15123.692 - 15224.517: 91.0494% ( 47) 00:08:25.774 15224.517 - 15325.342: 91.4834% ( 45) 00:08:25.774 15325.342 - 15426.166: 91.8113% ( 34) 00:08:25.774 15426.166 - 15526.991: 92.2164% ( 42) 00:08:25.774 15526.991 - 15627.815: 92.6890% ( 49) 00:08:25.774 15627.815 - 15728.640: 92.9109% ( 23) 00:08:25.774 15728.640 - 15829.465: 93.2292% ( 33) 00:08:25.774 15829.465 - 15930.289: 93.6053% ( 39) 00:08:25.774 15930.289 - 16031.114: 93.7307% ( 13) 00:08:25.774 16031.114 - 16131.938: 93.9043% ( 18) 00:08:25.774 16131.938 - 16232.763: 94.2612% ( 37) 00:08:25.774 16232.763 - 16333.588: 94.6856% ( 44) 00:08:25.774 16333.588 - 16434.412: 95.1871% ( 52) 00:08:25.774 16434.412 - 16535.237: 95.6790% ( 51) 00:08:25.774 16535.237 - 16636.062: 96.0455% ( 38) 00:08:25.774 16636.062 - 16736.886: 96.3831% ( 35) 00:08:25.774 16736.886 - 16837.711: 96.6242% ( 25) 00:08:25.774 16837.711 - 16938.535: 96.7593% ( 14) 00:08:25.774 16938.535 - 17039.360: 96.8943% ( 14) 00:08:25.774 17039.360 - 17140.185: 97.0486% ( 16) 00:08:25.774 17140.185 - 17241.009: 97.1933% ( 15) 00:08:25.774 17241.009 - 17341.834: 97.3283% ( 14) 00:08:25.774 17341.834 - 17442.658: 97.4441% ( 12) 00:08:25.774 17442.658 - 17543.483: 97.5116% ( 7) 00:08:25.774 17543.483 - 17644.308: 97.5984% ( 9) 00:08:25.774 17644.308 - 17745.132: 97.6852% ( 9) 00:08:25.774 17745.132 - 17845.957: 97.7623% ( 8) 00:08:25.774 17845.957 - 17946.782: 97.8588% ( 10) 00:08:25.774 17946.782 - 18047.606: 97.9649% ( 11) 00:08:25.774 18047.606 - 18148.431: 98.0035% ( 4) 00:08:25.774 18148.431 - 18249.255: 98.0613% ( 6) 00:08:25.774 18249.255 - 18350.080: 98.1192% ( 6) 00:08:25.774 18350.080 - 18450.905: 98.2060% ( 9) 00:08:25.774 18450.905 - 18551.729: 98.2542% ( 5) 00:08:25.774 18551.729 - 18652.554: 98.3121% ( 6) 00:08:25.774 18652.554 - 
18753.378: 98.3603% ( 5) 00:08:25.774 18753.378 - 18854.203: 98.3989% ( 4) 00:08:25.774 18854.203 - 18955.028: 98.4568% ( 6) 00:08:25.774 18955.028 - 19055.852: 98.5147% ( 6) 00:08:25.774 19055.852 - 19156.677: 98.5629% ( 5) 00:08:25.774 19156.677 - 19257.502: 98.6208% ( 6) 00:08:25.774 19257.502 - 19358.326: 98.6786% ( 6) 00:08:25.774 19358.326 - 19459.151: 98.7365% ( 6) 00:08:25.774 19459.151 - 19559.975: 98.7654% ( 3) 00:08:25.774 22786.363 - 22887.188: 98.7847% ( 2) 00:08:25.774 22887.188 - 22988.012: 98.8426% ( 6) 00:08:25.774 22988.012 - 23088.837: 98.8908% ( 5) 00:08:25.774 23088.837 - 23189.662: 98.9487% ( 6) 00:08:25.774 23189.662 - 23290.486: 99.0066% ( 6) 00:08:25.774 23290.486 - 23391.311: 99.0644% ( 6) 00:08:25.774 23391.311 - 23492.135: 99.1223% ( 6) 00:08:25.774 23492.135 - 23592.960: 99.1802% ( 6) 00:08:25.774 23592.960 - 23693.785: 99.2284% ( 5) 00:08:25.774 23693.785 - 23794.609: 99.2863% ( 6) 00:08:25.774 23794.609 - 23895.434: 99.3345% ( 5) 00:08:25.774 23895.434 - 23996.258: 99.3827% ( 5) 00:08:25.774 29239.138 - 29440.788: 99.4020% ( 2) 00:08:25.774 29440.788 - 29642.437: 99.4888% ( 9) 00:08:25.774 29642.437 - 29844.086: 99.5949% ( 11) 00:08:25.774 29844.086 - 30045.735: 99.7106% ( 12) 00:08:25.774 30045.735 - 30247.385: 99.8264% ( 12) 00:08:25.774 30247.385 - 30449.034: 99.9421% ( 12) 00:08:25.774 30449.034 - 30650.683: 100.0000% ( 6) 00:08:25.774 00:08:25.774 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:25.774 ============================================================================== 00:08:25.774 Range in us Cumulative IO count 00:08:25.774 7108.135 - 7158.548: 0.0193% ( 2) 00:08:25.774 7158.548 - 7208.960: 0.1157% ( 10) 00:08:25.774 7208.960 - 7259.372: 0.1833% ( 7) 00:08:25.774 7259.372 - 7309.785: 0.3954% ( 22) 00:08:25.774 7309.785 - 7360.197: 0.4244% ( 3) 00:08:25.774 7360.197 - 7410.609: 0.4630% ( 4) 00:08:25.774 7410.609 - 7461.022: 0.4919% ( 3) 00:08:25.774 7461.022 - 7511.434: 0.5305% ( 4) 00:08:25.774 
7511.434 - 7561.846: 0.5594% ( 3) 00:08:25.774 7561.846 - 7612.258: 0.5883% ( 3) 00:08:25.774 7612.258 - 7662.671: 0.6173% ( 3) 00:08:25.774 8721.329 - 8771.742: 0.6269% ( 1) 00:08:25.774 8822.154 - 8872.566: 0.6462% ( 2) 00:08:25.774 8872.566 - 8922.978: 0.7234% ( 8) 00:08:25.774 8922.978 - 8973.391: 0.8102% ( 9) 00:08:25.774 8973.391 - 9023.803: 0.9259% ( 12) 00:08:25.774 9023.803 - 9074.215: 1.0706% ( 15) 00:08:25.774 9074.215 - 9124.628: 1.2924% ( 23) 00:08:25.774 9124.628 - 9175.040: 1.4660% ( 18) 00:08:25.774 9175.040 - 9225.452: 1.7650% ( 31) 00:08:25.774 9225.452 - 9275.865: 1.9772% ( 22) 00:08:25.774 9275.865 - 9326.277: 2.1412% ( 17) 00:08:25.774 9326.277 - 9376.689: 2.3341% ( 20) 00:08:25.774 9376.689 - 9427.102: 2.5849% ( 26) 00:08:25.774 9427.102 - 9477.514: 2.7874% ( 21) 00:08:25.774 9477.514 - 9527.926: 2.9900% ( 21) 00:08:25.774 9527.926 - 9578.338: 3.3468% ( 37) 00:08:25.774 9578.338 - 9628.751: 3.6073% ( 27) 00:08:25.774 9628.751 - 9679.163: 4.1667% ( 58) 00:08:25.774 9679.163 - 9729.575: 4.7068% ( 56) 00:08:25.774 9729.575 - 9779.988: 5.0733% ( 38) 00:08:25.774 9779.988 - 9830.400: 5.4205% ( 36) 00:08:25.774 9830.400 - 9880.812: 5.8160% ( 41) 00:08:25.774 9880.812 - 9931.225: 6.2886% ( 49) 00:08:25.774 9931.225 - 9981.637: 6.8962% ( 63) 00:08:25.774 9981.637 - 10032.049: 7.4653% ( 59) 00:08:25.774 10032.049 - 10082.462: 8.2272% ( 79) 00:08:25.774 10082.462 - 10132.874: 9.2593% ( 107) 00:08:25.774 10132.874 - 10183.286: 10.0019% ( 77) 00:08:25.774 10183.286 - 10233.698: 10.8025% ( 83) 00:08:25.774 10233.698 - 10284.111: 11.6416% ( 87) 00:08:25.774 10284.111 - 10334.523: 12.5482% ( 94) 00:08:25.774 10334.523 - 10384.935: 13.4934% ( 98) 00:08:25.774 10384.935 - 10435.348: 14.6412% ( 119) 00:08:25.774 10435.348 - 10485.760: 15.6829% ( 108) 00:08:25.774 10485.760 - 10536.172: 16.6570% ( 101) 00:08:25.774 10536.172 - 10586.585: 17.6215% ( 100) 00:08:25.774 10586.585 - 10636.997: 18.5667% ( 98) 00:08:25.774 10636.997 - 10687.409: 19.5409% ( 101) 
00:08:25.774 10687.409 - 10737.822: 20.8816% ( 139) 00:08:25.774 10737.822 - 10788.234: 22.3187% ( 149) 00:08:25.774 10788.234 - 10838.646: 23.5243% ( 125) 00:08:25.774 10838.646 - 10889.058: 24.5177% ( 103) 00:08:25.774 10889.058 - 10939.471: 25.4630% ( 98) 00:08:25.774 10939.471 - 10989.883: 26.8036% ( 139) 00:08:25.774 10989.883 - 11040.295: 27.9900% ( 123) 00:08:25.774 11040.295 - 11090.708: 29.1377% ( 119) 00:08:25.774 11090.708 - 11141.120: 30.1698% ( 107) 00:08:25.774 11141.120 - 11191.532: 31.4815% ( 136) 00:08:25.774 11191.532 - 11241.945: 33.1019% ( 168) 00:08:25.774 11241.945 - 11292.357: 34.6740% ( 163) 00:08:25.774 11292.357 - 11342.769: 36.0147% ( 139) 00:08:25.774 11342.769 - 11393.182: 37.1238% ( 115) 00:08:25.774 11393.182 - 11443.594: 38.4549% ( 138) 00:08:25.774 11443.594 - 11494.006: 39.7666% ( 136) 00:08:25.774 11494.006 - 11544.418: 41.1748% ( 146) 00:08:25.774 11544.418 - 11594.831: 42.7276% ( 161) 00:08:25.774 11594.831 - 11645.243: 44.4155% ( 175) 00:08:25.774 11645.243 - 11695.655: 45.8623% ( 150) 00:08:25.774 11695.655 - 11746.068: 47.1258% ( 131) 00:08:25.774 11746.068 - 11796.480: 48.6015% ( 153) 00:08:25.774 11796.480 - 11846.892: 49.8071% ( 125) 00:08:25.774 11846.892 - 11897.305: 51.3889% ( 164) 00:08:25.774 11897.305 - 11947.717: 52.6524% ( 131) 00:08:25.774 11947.717 - 11998.129: 53.5880% ( 97) 00:08:25.774 11998.129 - 12048.542: 54.5332% ( 98) 00:08:25.774 12048.542 - 12098.954: 55.3144% ( 81) 00:08:25.774 12098.954 - 12149.366: 56.3657% ( 109) 00:08:25.774 12149.366 - 12199.778: 57.3206% ( 99) 00:08:25.774 12199.778 - 12250.191: 58.4491% ( 117) 00:08:25.774 12250.191 - 12300.603: 59.4522% ( 104) 00:08:25.774 12300.603 - 12351.015: 60.3781% ( 96) 00:08:25.775 12351.015 - 12401.428: 61.2461% ( 90) 00:08:25.775 12401.428 - 12451.840: 62.1335% ( 92) 00:08:25.775 12451.840 - 12502.252: 62.9437% ( 84) 00:08:25.775 12502.252 - 12552.665: 63.9275% ( 102) 00:08:25.775 12552.665 - 12603.077: 65.0367% ( 115) 00:08:25.775 12603.077 - 
12653.489: 66.3194% ( 133) 00:08:25.775 12653.489 - 12703.902: 67.6505% ( 138) 00:08:25.775 12703.902 - 12754.314: 69.0008% ( 140) 00:08:25.775 12754.314 - 12804.726: 69.7820% ( 81) 00:08:25.775 12804.726 - 12855.138: 70.5054% ( 75) 00:08:25.775 12855.138 - 12905.551: 71.3252% ( 85) 00:08:25.775 12905.551 - 13006.375: 72.5116% ( 123) 00:08:25.775 13006.375 - 13107.200: 73.7172% ( 125) 00:08:25.775 13107.200 - 13208.025: 74.9711% ( 130) 00:08:25.775 13208.025 - 13308.849: 75.9934% ( 106) 00:08:25.775 13308.849 - 13409.674: 77.2473% ( 130) 00:08:25.775 13409.674 - 13510.498: 78.4915% ( 129) 00:08:25.775 13510.498 - 13611.323: 79.8225% ( 138) 00:08:25.775 13611.323 - 13712.148: 80.6520% ( 86) 00:08:25.775 13712.148 - 13812.972: 81.4815% ( 86) 00:08:25.775 13812.972 - 13913.797: 82.4171% ( 97) 00:08:25.775 13913.797 - 14014.622: 83.6420% ( 127) 00:08:25.775 14014.622 - 14115.446: 85.0887% ( 150) 00:08:25.775 14115.446 - 14216.271: 86.3137% ( 127) 00:08:25.775 14216.271 - 14317.095: 87.5772% ( 131) 00:08:25.775 14317.095 - 14417.920: 88.4645% ( 92) 00:08:25.775 14417.920 - 14518.745: 88.9950% ( 55) 00:08:25.775 14518.745 - 14619.569: 89.3326% ( 35) 00:08:25.775 14619.569 - 14720.394: 89.5737% ( 25) 00:08:25.775 14720.394 - 14821.218: 89.6991% ( 13) 00:08:25.775 14821.218 - 14922.043: 89.7762% ( 8) 00:08:25.775 14922.043 - 15022.868: 89.9306% ( 16) 00:08:25.775 15022.868 - 15123.692: 90.0945% ( 17) 00:08:25.775 15123.692 - 15224.517: 90.2971% ( 21) 00:08:25.775 15224.517 - 15325.342: 90.6443% ( 36) 00:08:25.775 15325.342 - 15426.166: 90.9626% ( 33) 00:08:25.775 15426.166 - 15526.991: 91.5992% ( 66) 00:08:25.775 15526.991 - 15627.815: 92.0235% ( 44) 00:08:25.775 15627.815 - 15728.640: 92.3804% ( 37) 00:08:25.775 15728.640 - 15829.465: 92.6022% ( 23) 00:08:25.775 15829.465 - 15930.289: 92.9495% ( 36) 00:08:25.775 15930.289 - 16031.114: 93.3931% ( 46) 00:08:25.775 16031.114 - 16131.938: 93.7789% ( 40) 00:08:25.775 16131.938 - 16232.763: 94.3576% ( 60) 00:08:25.775 16232.763 
- 16333.588: 94.7820% ( 44) 00:08:25.775 16333.588 - 16434.412: 95.1292% ( 36) 00:08:25.775 16434.412 - 16535.237: 95.5247% ( 41) 00:08:25.775 16535.237 - 16636.062: 95.9684% ( 46) 00:08:25.775 16636.062 - 16736.886: 96.2191% ( 26) 00:08:25.775 16736.886 - 16837.711: 96.4217% ( 21) 00:08:25.775 16837.711 - 16938.535: 96.6242% ( 21) 00:08:25.775 16938.535 - 17039.360: 96.8654% ( 25) 00:08:25.775 17039.360 - 17140.185: 97.1258% ( 27) 00:08:25.775 17140.185 - 17241.009: 97.2801% ( 16) 00:08:25.775 17241.009 - 17341.834: 97.4923% ( 22) 00:08:25.775 17341.834 - 17442.658: 97.7334% ( 25) 00:08:25.775 17442.658 - 17543.483: 97.8684% ( 14) 00:08:25.775 17543.483 - 17644.308: 97.9649% ( 10) 00:08:25.775 17644.308 - 17745.132: 98.0421% ( 8) 00:08:25.775 17745.132 - 17845.957: 98.1096% ( 7) 00:08:25.775 17845.957 - 17946.782: 98.1385% ( 3) 00:08:25.775 17946.782 - 18047.606: 98.1481% ( 1) 00:08:25.775 18047.606 - 18148.431: 98.1578% ( 1) 00:08:25.775 18148.431 - 18249.255: 98.1867% ( 3) 00:08:25.775 18249.255 - 18350.080: 98.2157% ( 3) 00:08:25.775 18350.080 - 18450.905: 98.2735% ( 6) 00:08:25.775 18450.905 - 18551.729: 98.3314% ( 6) 00:08:25.775 18551.729 - 18652.554: 98.3893% ( 6) 00:08:25.775 18652.554 - 18753.378: 98.4375% ( 5) 00:08:25.775 18753.378 - 18854.203: 98.4954% ( 6) 00:08:25.775 18854.203 - 18955.028: 98.5532% ( 6) 00:08:25.775 18955.028 - 19055.852: 98.6111% ( 6) 00:08:25.775 19055.852 - 19156.677: 98.6690% ( 6) 00:08:25.775 19156.677 - 19257.502: 98.7269% ( 6) 00:08:25.775 19257.502 - 19358.326: 98.7654% ( 4) 00:08:25.775 23290.486 - 23391.311: 98.7944% ( 3) 00:08:25.775 23391.311 - 23492.135: 98.8329% ( 4) 00:08:25.775 23492.135 - 23592.960: 98.8812% ( 5) 00:08:25.775 23592.960 - 23693.785: 98.9294% ( 5) 00:08:25.775 23693.785 - 23794.609: 98.9680% ( 4) 00:08:25.775 23794.609 - 23895.434: 99.0066% ( 4) 00:08:25.775 23895.434 - 23996.258: 99.0451% ( 4) 00:08:25.775 23996.258 - 24097.083: 99.0837% ( 4) 00:08:25.775 24097.083 - 24197.908: 99.1319% ( 5) 
00:08:25.775 24197.908 - 24298.732: 99.1705% ( 4) 00:08:25.775 24298.732 - 24399.557: 99.2284% ( 6) 00:08:25.775 24399.557 - 24500.382: 99.2766% ( 5) 00:08:25.775 24500.382 - 24601.206: 99.3345% ( 6) 00:08:25.775 24601.206 - 24702.031: 99.3827% ( 5) 00:08:25.775 29642.437 - 29844.086: 99.4888% ( 11) 00:08:25.775 29844.086 - 30045.735: 99.5853% ( 10) 00:08:25.775 30045.735 - 30247.385: 99.7010% ( 12) 00:08:25.775 30247.385 - 30449.034: 99.8167% ( 12) 00:08:25.775 30449.034 - 30650.683: 99.9325% ( 12) 00:08:25.775 30650.683 - 30852.332: 100.0000% ( 7) 00:08:25.775 00:08:25.775 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:25.775 ============================================================================== 00:08:25.775 Range in us Cumulative IO count 00:08:25.775 6427.569 - 6452.775: 0.0096% ( 1) 00:08:25.775 6452.775 - 6503.188: 0.1350% ( 13) 00:08:25.775 6503.188 - 6553.600: 0.2990% ( 17) 00:08:25.775 6553.600 - 6604.012: 0.4051% ( 11) 00:08:25.775 6604.012 - 6654.425: 0.4340% ( 3) 00:08:25.775 6654.425 - 6704.837: 0.4630% ( 3) 00:08:25.775 6704.837 - 6755.249: 0.5015% ( 4) 00:08:25.775 6755.249 - 6805.662: 0.5305% ( 3) 00:08:25.775 6805.662 - 6856.074: 0.5691% ( 4) 00:08:25.775 6856.074 - 6906.486: 0.5980% ( 3) 00:08:25.775 6906.486 - 6956.898: 0.6173% ( 2) 00:08:25.775 8721.329 - 8771.742: 0.6269% ( 1) 00:08:25.775 8771.742 - 8822.154: 0.6462% ( 2) 00:08:25.775 8822.154 - 8872.566: 0.6944% ( 5) 00:08:25.775 8872.566 - 8922.978: 0.7234% ( 3) 00:08:25.775 8922.978 - 8973.391: 0.7620% ( 4) 00:08:25.775 8973.391 - 9023.803: 0.8005% ( 4) 00:08:25.775 9023.803 - 9074.215: 0.8584% ( 6) 00:08:25.775 9074.215 - 9124.628: 0.9934% ( 14) 00:08:25.775 9124.628 - 9175.040: 1.2442% ( 26) 00:08:25.775 9175.040 - 9225.452: 1.3985% ( 16) 00:08:25.775 9225.452 - 9275.865: 1.5143% ( 12) 00:08:25.775 9275.865 - 9326.277: 1.7747% ( 27) 00:08:25.775 9326.277 - 9376.689: 2.0833% ( 32) 00:08:25.775 9376.689 - 9427.102: 2.3148% ( 24) 00:08:25.775 9427.102 - 9477.514: 
2.6235% ( 32) 00:08:25.775 9477.514 - 9527.926: 2.9417% ( 33) 00:08:25.775 9527.926 - 9578.338: 3.2504% ( 32) 00:08:25.775 9578.338 - 9628.751: 3.6265% ( 39) 00:08:25.775 9628.751 - 9679.163: 4.1184% ( 51) 00:08:25.775 9679.163 - 9729.575: 4.5332% ( 43) 00:08:25.775 9729.575 - 9779.988: 5.0540% ( 54) 00:08:25.775 9779.988 - 9830.400: 5.6327% ( 60) 00:08:25.775 9830.400 - 9880.812: 6.1439% ( 53) 00:08:25.775 9880.812 - 9931.225: 6.6744% ( 55) 00:08:25.775 9931.225 - 9981.637: 7.2627% ( 61) 00:08:25.775 9981.637 - 10032.049: 7.9282% ( 69) 00:08:25.775 10032.049 - 10082.462: 8.7384% ( 84) 00:08:25.775 10082.462 - 10132.874: 9.5583% ( 85) 00:08:25.775 10132.874 - 10183.286: 10.6192% ( 110) 00:08:25.775 10183.286 - 10233.698: 11.5066% ( 92) 00:08:25.775 10233.698 - 10284.111: 12.2396% ( 76) 00:08:25.775 10284.111 - 10334.523: 13.1848% ( 98) 00:08:25.775 10334.523 - 10384.935: 14.0336% ( 88) 00:08:25.775 10384.935 - 10435.348: 14.9691% ( 97) 00:08:25.775 10435.348 - 10485.760: 15.7311% ( 79) 00:08:25.775 10485.760 - 10536.172: 16.8403% ( 115) 00:08:25.775 10536.172 - 10586.585: 17.7276% ( 92) 00:08:25.775 10586.585 - 10636.997: 18.5282% ( 83) 00:08:25.775 10636.997 - 10687.409: 19.3287% ( 83) 00:08:25.775 10687.409 - 10737.822: 20.3800% ( 109) 00:08:25.775 10737.822 - 10788.234: 21.3445% ( 100) 00:08:25.775 10788.234 - 10838.646: 22.3958% ( 109) 00:08:25.775 10838.646 - 10889.058: 23.3507% ( 99) 00:08:25.775 10889.058 - 10939.471: 24.4020% ( 109) 00:08:25.775 10939.471 - 10989.883: 25.7716% ( 142) 00:08:25.775 10989.883 - 11040.295: 27.1605% ( 144) 00:08:25.775 11040.295 - 11090.708: 28.3083% ( 119) 00:08:25.775 11090.708 - 11141.120: 29.4174% ( 115) 00:08:25.775 11141.120 - 11191.532: 30.6520% ( 128) 00:08:25.775 11191.532 - 11241.945: 32.1663% ( 157) 00:08:25.775 11241.945 - 11292.357: 33.4008% ( 128) 00:08:25.775 11292.357 - 11342.769: 34.7704% ( 142) 00:08:25.775 11342.769 - 11393.182: 35.9761% ( 125) 00:08:25.775 11393.182 - 11443.594: 37.5386% ( 162) 00:08:25.775 
11443.594 - 11494.006: 38.9178% ( 143) 00:08:25.775 11494.006 - 11544.418: 40.2681% ( 140) 00:08:25.775 11544.418 - 11594.831: 41.8403% ( 163) 00:08:25.775 11594.831 - 11645.243: 43.1713% ( 138) 00:08:25.775 11645.243 - 11695.655: 44.6277% ( 151) 00:08:25.775 11695.655 - 11746.068: 45.8140% ( 123) 00:08:25.775 11746.068 - 11796.480: 47.1547% ( 139) 00:08:25.775 11796.480 - 11846.892: 48.8137% ( 172) 00:08:25.775 11846.892 - 11897.305: 50.3954% ( 164) 00:08:25.775 11897.305 - 11947.717: 52.2184% ( 189) 00:08:25.775 11947.717 - 11998.129: 53.4819% ( 131) 00:08:25.775 11998.129 - 12048.542: 55.1890% ( 177) 00:08:25.775 12048.542 - 12098.954: 56.5490% ( 141) 00:08:25.775 12098.954 - 12149.366: 57.6196% ( 111) 00:08:25.775 12149.366 - 12199.778: 58.5841% ( 100) 00:08:25.775 12199.778 - 12250.191: 59.4425% ( 89) 00:08:25.775 12250.191 - 12300.603: 60.2238% ( 81) 00:08:25.775 12300.603 - 12351.015: 61.1111% ( 92) 00:08:25.775 12351.015 - 12401.428: 61.7959% ( 71) 00:08:25.775 12401.428 - 12451.840: 62.5289% ( 76) 00:08:25.775 12451.840 - 12502.252: 63.2812% ( 78) 00:08:25.775 12502.252 - 12552.665: 64.1782% ( 93) 00:08:25.775 12552.665 - 12603.077: 65.2488% ( 111) 00:08:25.775 12603.077 - 12653.489: 66.5027% ( 130) 00:08:25.775 12653.489 - 12703.902: 67.4286% ( 96) 00:08:25.775 12703.902 - 12754.314: 68.4414% ( 105) 00:08:25.775 12754.314 - 12804.726: 69.5216% ( 112) 00:08:25.775 12804.726 - 12855.138: 70.6983% ( 122) 00:08:25.775 12855.138 - 12905.551: 71.6725% ( 101) 00:08:25.775 12905.551 - 13006.375: 73.4954% ( 189) 00:08:25.775 13006.375 - 13107.200: 74.7782% ( 133) 00:08:25.775 13107.200 - 13208.025: 75.7716% ( 103) 00:08:25.775 13208.025 - 13308.849: 76.7843% ( 105) 00:08:25.775 13308.849 - 13409.674: 78.2697% ( 154) 00:08:25.775 13409.674 - 13510.498: 79.2438% ( 101) 00:08:25.775 13510.498 - 13611.323: 79.9961% ( 78) 00:08:25.775 13611.323 - 13712.148: 80.6713% ( 70) 00:08:25.775 13712.148 - 13812.972: 81.4333% ( 79) 00:08:25.775 13812.972 - 13913.797: 82.3785% ( 
98) 00:08:25.775 13913.797 - 14014.622: 83.3430% ( 100) 00:08:25.775 14014.622 - 14115.446: 84.1339% ( 82) 00:08:25.775 14115.446 - 14216.271: 84.9441% ( 84) 00:08:25.775 14216.271 - 14317.095: 85.8507% ( 94) 00:08:25.775 14317.095 - 14417.920: 86.6609% ( 84) 00:08:25.775 14417.920 - 14518.745: 87.2106% ( 57) 00:08:25.775 14518.745 - 14619.569: 87.9147% ( 73) 00:08:25.775 14619.569 - 14720.394: 88.7056% ( 82) 00:08:25.775 14720.394 - 14821.218: 89.2168% ( 53) 00:08:25.775 14821.218 - 14922.043: 89.6894% ( 49) 00:08:25.775 14922.043 - 15022.868: 90.2392% ( 57) 00:08:25.775 15022.868 - 15123.692: 90.9433% ( 73) 00:08:25.775 15123.692 - 15224.517: 91.3484% ( 42) 00:08:25.775 15224.517 - 15325.342: 91.5992% ( 26) 00:08:25.775 15325.342 - 15426.166: 91.8113% ( 22) 00:08:25.775 15426.166 - 15526.991: 91.9560% ( 15) 00:08:25.775 15526.991 - 15627.815: 92.1682% ( 22) 00:08:25.775 15627.815 - 15728.640: 92.5154% ( 36) 00:08:25.775 15728.640 - 15829.465: 92.9109% ( 41) 00:08:25.775 15829.465 - 15930.289: 93.2967% ( 40) 00:08:25.775 15930.289 - 16031.114: 94.1262% ( 86) 00:08:25.775 16031.114 - 16131.938: 94.5216% ( 41) 00:08:25.775 16131.938 - 16232.763: 94.8399% ( 33) 00:08:25.775 16232.763 - 16333.588: 95.1775% ( 35) 00:08:25.775 16333.588 - 16434.412: 95.4090% ( 24) 00:08:25.775 16434.412 - 16535.237: 95.6308% ( 23) 00:08:25.775 16535.237 - 16636.062: 95.8333% ( 21) 00:08:25.775 16636.062 - 16736.886: 95.9973% ( 17) 00:08:25.775 16736.886 - 16837.711: 96.1613% ( 17) 00:08:25.775 16837.711 - 16938.535: 96.3927% ( 24) 00:08:25.775 16938.535 - 17039.360: 96.6435% ( 26) 00:08:25.775 17039.360 - 17140.185: 96.8364% ( 20) 00:08:25.775 17140.185 - 17241.009: 97.0872% ( 26) 00:08:25.775 17241.009 - 17341.834: 97.3090% ( 23) 00:08:25.775 17341.834 - 17442.658: 97.5405% ( 24) 00:08:25.775 17442.658 - 17543.483: 97.8106% ( 28) 00:08:25.775 17543.483 - 17644.308: 98.0903% ( 29) 00:08:25.775 17644.308 - 17745.132: 98.2928% ( 21) 00:08:25.775 17745.132 - 17845.957: 98.4761% ( 19) 
00:08:25.775 17845.957 - 17946.782: 98.6208% ( 15) 00:08:25.775 17946.782 - 18047.606: 98.6786% ( 6) 00:08:25.775 18047.606 - 18148.431: 98.7365% ( 6) 00:08:25.775 18148.431 - 18249.255: 98.7654% ( 3) 00:08:25.775 22887.188 - 22988.012: 98.7751% ( 1) 00:08:25.775 22988.012 - 23088.837: 98.8329% ( 6) 00:08:25.775 23088.837 - 23189.662: 98.8812% ( 5) 00:08:25.775 23189.662 - 23290.486: 98.9294% ( 5) 00:08:25.775 23290.486 - 23391.311: 98.9776% ( 5) 00:08:25.775 23391.311 - 23492.135: 99.0258% ( 5) 00:08:25.775 23492.135 - 23592.960: 99.0837% ( 6) 00:08:25.775 23592.960 - 23693.785: 99.1416% ( 6) 00:08:25.775 23693.785 - 23794.609: 99.1995% ( 6) 00:08:25.775 23794.609 - 23895.434: 99.2573% ( 6) 00:08:25.775 23895.434 - 23996.258: 99.3056% ( 5) 00:08:25.775 23996.258 - 24097.083: 99.3634% ( 6) 00:08:25.775 24097.083 - 24197.908: 99.3827% ( 2) 00:08:25.775 28835.840 - 29037.489: 99.4117% ( 3) 00:08:25.775 29037.489 - 29239.138: 99.5274% ( 12) 00:08:25.775 29239.138 - 29440.788: 99.6431% ( 12) 00:08:25.775 29440.788 - 29642.437: 99.7492% ( 11) 00:08:25.775 29642.437 - 29844.086: 99.8650% ( 12) 00:08:25.775 29844.086 - 30045.735: 99.9614% ( 10) 00:08:25.775 30045.735 - 30247.385: 100.0000% ( 4) 00:08:25.775 00:08:25.775 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:25.775 ============================================================================== 00:08:25.775 Range in us Cumulative IO count 00:08:25.775 5747.003 - 5772.209: 0.0096% ( 1) 00:08:25.775 5772.209 - 5797.415: 0.0386% ( 3) 00:08:25.775 5797.415 - 5822.622: 0.0675% ( 3) 00:08:25.775 5822.622 - 5847.828: 0.1061% ( 4) 00:08:25.775 5847.828 - 5873.034: 0.1736% ( 7) 00:08:25.775 5873.034 - 5898.240: 0.2315% ( 6) 00:08:25.775 5898.240 - 5923.446: 0.3472% ( 12) 00:08:25.775 5923.446 - 5948.652: 0.4244% ( 8) 00:08:25.775 5948.652 - 5973.858: 0.4533% ( 3) 00:08:25.775 5973.858 - 5999.065: 0.4726% ( 2) 00:08:25.775 5999.065 - 6024.271: 0.4823% ( 1) 00:08:25.775 6024.271 - 6049.477: 0.4919% ( 1) 
00:08:25.775 6049.477 - 6074.683: 0.5112% ( 2) 00:08:25.775 6074.683 - 6099.889: 0.5305% ( 2) 00:08:25.775 6099.889 - 6125.095: 0.5401% ( 1) 00:08:25.775 6125.095 - 6150.302: 0.5594% ( 2) 00:08:25.775 6150.302 - 6175.508: 0.5691% ( 1) 00:08:25.775 6175.508 - 6200.714: 0.5883% ( 2) 00:08:25.775 6200.714 - 6225.920: 0.6076% ( 2) 00:08:25.775 6225.920 - 6251.126: 0.6173% ( 1) 00:08:25.775 8721.329 - 8771.742: 0.6269% ( 1) 00:08:25.775 8872.566 - 8922.978: 0.6366% ( 1) 00:08:25.775 8922.978 - 8973.391: 0.6655% ( 3) 00:08:25.775 8973.391 - 9023.803: 0.7427% ( 8) 00:08:25.775 9023.803 - 9074.215: 0.8777% ( 14) 00:08:25.775 9074.215 - 9124.628: 1.0513% ( 18) 00:08:25.775 9124.628 - 9175.040: 1.2731% ( 23) 00:08:25.775 9175.040 - 9225.452: 1.6204% ( 36) 00:08:25.775 9225.452 - 9275.865: 1.8519% ( 24) 00:08:25.775 9275.865 - 9326.277: 2.1508% ( 31) 00:08:25.775 9326.277 - 9376.689: 2.3245% ( 18) 00:08:25.775 9376.689 - 9427.102: 2.5367% ( 22) 00:08:25.775 9427.102 - 9477.514: 2.7296% ( 20) 00:08:25.775 9477.514 - 9527.926: 2.9514% ( 23) 00:08:25.775 9527.926 - 9578.338: 3.3661% ( 43) 00:08:25.775 9578.338 - 9628.751: 3.5976% ( 24) 00:08:25.775 9628.751 - 9679.163: 3.8291% ( 24) 00:08:25.776 9679.163 - 9729.575: 4.2535% ( 44) 00:08:25.776 9729.575 - 9779.988: 4.6971% ( 46) 00:08:25.776 9779.988 - 9830.400: 5.2855% ( 61) 00:08:25.776 9830.400 - 9880.812: 5.9124% ( 65) 00:08:25.776 9880.812 - 9931.225: 6.5876% ( 70) 00:08:25.776 9931.225 - 9981.637: 7.3495% ( 79) 00:08:25.776 9981.637 - 10032.049: 8.1404% ( 82) 00:08:25.776 10032.049 - 10082.462: 8.9313% ( 82) 00:08:25.776 10082.462 - 10132.874: 9.7994% ( 90) 00:08:25.776 10132.874 - 10183.286: 10.4552% ( 68) 00:08:25.776 10183.286 - 10233.698: 11.0147% ( 58) 00:08:25.776 10233.698 - 10284.111: 11.7573% ( 77) 00:08:25.776 10284.111 - 10334.523: 12.5965% ( 87) 00:08:25.776 10334.523 - 10384.935: 13.2427% ( 67) 00:08:25.776 10384.935 - 10435.348: 14.0336% ( 82) 00:08:25.776 10435.348 - 10485.760: 15.1717% ( 118) 00:08:25.776 
10485.760 - 10536.172: 16.0012% ( 86) 00:08:25.776 10536.172 - 10586.585: 16.9560% ( 99) 00:08:25.776 10586.585 - 10636.997: 18.1038% ( 119) 00:08:25.776 10636.997 - 10687.409: 19.6952% ( 165) 00:08:25.776 10687.409 - 10737.822: 21.4217% ( 179) 00:08:25.776 10737.822 - 10788.234: 22.7816% ( 141) 00:08:25.776 10788.234 - 10838.646: 23.9873% ( 125) 00:08:25.776 10838.646 - 10889.058: 25.0193% ( 107) 00:08:25.776 10889.058 - 10939.471: 26.0995% ( 112) 00:08:25.776 10939.471 - 10989.883: 27.0255% ( 96) 00:08:25.776 10989.883 - 11040.295: 28.0575% ( 107) 00:08:25.776 11040.295 - 11090.708: 29.3017% ( 129) 00:08:25.776 11090.708 - 11141.120: 30.2180% ( 95) 00:08:25.776 11141.120 - 11191.532: 31.2886% ( 111) 00:08:25.776 11191.532 - 11241.945: 32.7064% ( 147) 00:08:25.776 11241.945 - 11292.357: 33.9603% ( 130) 00:08:25.776 11292.357 - 11342.769: 35.1948% ( 128) 00:08:25.776 11342.769 - 11393.182: 36.4101% ( 126) 00:08:25.776 11393.182 - 11443.594: 38.0498% ( 170) 00:08:25.776 11443.594 - 11494.006: 39.5351% ( 154) 00:08:25.776 11494.006 - 11544.418: 40.8275% ( 134) 00:08:25.776 11544.418 - 11594.831: 42.3032% ( 153) 00:08:25.776 11594.831 - 11645.243: 43.5089% ( 125) 00:08:25.776 11645.243 - 11695.655: 45.0907% ( 164) 00:08:25.776 11695.655 - 11746.068: 46.3349% ( 129) 00:08:25.776 11746.068 - 11796.480: 47.6273% ( 134) 00:08:25.776 11796.480 - 11846.892: 48.9776% ( 140) 00:08:25.776 11846.892 - 11897.305: 50.2411% ( 131) 00:08:25.776 11897.305 - 11947.717: 51.4950% ( 130) 00:08:25.776 11947.717 - 11998.129: 52.6813% ( 123) 00:08:25.776 11998.129 - 12048.542: 53.9641% ( 133) 00:08:25.776 12048.542 - 12098.954: 55.2276% ( 131) 00:08:25.776 12098.954 - 12149.366: 56.5876% ( 141) 00:08:25.776 12149.366 - 12199.778: 57.8607% ( 132) 00:08:25.776 12199.778 - 12250.191: 59.2014% ( 139) 00:08:25.776 12250.191 - 12300.603: 60.2045% ( 104) 00:08:25.776 12300.603 - 12351.015: 61.4198% ( 126) 00:08:25.776 12351.015 - 12401.428: 62.5000% ( 112) 00:08:25.776 12401.428 - 12451.840: 
63.5802% ( 112) 00:08:25.776 12451.840 - 12502.252: 64.5255% ( 98) 00:08:25.776 12502.252 - 12552.665: 65.3839% ( 89) 00:08:25.776 12552.665 - 12603.077: 66.1941% ( 84) 00:08:25.776 12603.077 - 12653.489: 66.9850% ( 82) 00:08:25.776 12653.489 - 12703.902: 67.6119% ( 65) 00:08:25.776 12703.902 - 12754.314: 68.3160% ( 73) 00:08:25.776 12754.314 - 12804.726: 69.0876% ( 80) 00:08:25.776 12804.726 - 12855.138: 69.9942% ( 94) 00:08:25.776 12855.138 - 12905.551: 70.9877% ( 103) 00:08:25.776 12905.551 - 13006.375: 72.4151% ( 148) 00:08:25.776 13006.375 - 13107.200: 73.9390% ( 158) 00:08:25.776 13107.200 - 13208.025: 75.0289% ( 113) 00:08:25.776 13208.025 - 13308.849: 76.1863% ( 120) 00:08:25.776 13308.849 - 13409.674: 76.9579% ( 80) 00:08:25.776 13409.674 - 13510.498: 77.9417% ( 102) 00:08:25.776 13510.498 - 13611.323: 79.1474% ( 125) 00:08:25.776 13611.323 - 13712.148: 80.3916% ( 129) 00:08:25.776 13712.148 - 13812.972: 81.5972% ( 125) 00:08:25.776 13812.972 - 13913.797: 82.5521% ( 99) 00:08:25.776 13913.797 - 14014.622: 83.6902% ( 118) 00:08:25.776 14014.622 - 14115.446: 84.6740% ( 102) 00:08:25.776 14115.446 - 14216.271: 85.3395% ( 69) 00:08:25.776 14216.271 - 14317.095: 85.9954% ( 68) 00:08:25.776 14317.095 - 14417.920: 86.5741% ( 60) 00:08:25.776 14417.920 - 14518.745: 87.0563% ( 50) 00:08:25.776 14518.745 - 14619.569: 87.6157% ( 58) 00:08:25.776 14619.569 - 14720.394: 88.1944% ( 60) 00:08:25.776 14720.394 - 14821.218: 88.5224% ( 34) 00:08:25.776 14821.218 - 14922.043: 88.8696% ( 36) 00:08:25.776 14922.043 - 15022.868: 89.2361% ( 38) 00:08:25.776 15022.868 - 15123.692: 89.7377% ( 52) 00:08:25.776 15123.692 - 15224.517: 90.5189% ( 81) 00:08:25.776 15224.517 - 15325.342: 91.0108% ( 51) 00:08:25.776 15325.342 - 15426.166: 91.7535% ( 77) 00:08:25.776 15426.166 - 15526.991: 92.3418% ( 61) 00:08:25.776 15526.991 - 15627.815: 92.8627% ( 54) 00:08:25.776 15627.815 - 15728.640: 93.2774% ( 43) 00:08:25.776 15728.640 - 15829.465: 93.8272% ( 57) 00:08:25.776 15829.465 - 
15930.289: 94.1840% ( 37) 00:08:25.776 15930.289 - 16031.114: 94.5409% ( 37) 00:08:25.776 16031.114 - 16131.938: 94.7820% ( 25) 00:08:25.776 16131.938 - 16232.763: 95.2353% ( 47) 00:08:25.776 16232.763 - 16333.588: 95.5247% ( 30) 00:08:25.776 16333.588 - 16434.412: 95.8719% ( 36) 00:08:25.776 16434.412 - 16535.237: 96.1613% ( 30) 00:08:25.776 16535.237 - 16636.062: 96.4313% ( 28) 00:08:25.776 16636.062 - 16736.886: 96.6725% ( 25) 00:08:25.776 16736.886 - 16837.711: 96.8557% ( 19) 00:08:25.776 16837.711 - 16938.535: 97.0293% ( 18) 00:08:25.776 16938.535 - 17039.360: 97.1451% ( 12) 00:08:25.776 17039.360 - 17140.185: 97.2415% ( 10) 00:08:25.776 17140.185 - 17241.009: 97.3283% ( 9) 00:08:25.776 17241.009 - 17341.834: 97.3958% ( 7) 00:08:25.776 17341.834 - 17442.658: 97.4826% ( 9) 00:08:25.776 17442.658 - 17543.483: 97.6177% ( 14) 00:08:25.776 17543.483 - 17644.308: 97.7623% ( 15) 00:08:25.776 17644.308 - 17745.132: 97.9938% ( 24) 00:08:25.776 17745.132 - 17845.957: 98.1771% ( 19) 00:08:25.776 17845.957 - 17946.782: 98.2639% ( 9) 00:08:25.776 17946.782 - 18047.606: 98.4375% ( 18) 00:08:25.776 18047.606 - 18148.431: 98.6208% ( 19) 00:08:25.776 18148.431 - 18249.255: 98.6690% ( 5) 00:08:25.776 18249.255 - 18350.080: 98.7076% ( 4) 00:08:25.776 18350.080 - 18450.905: 98.7558% ( 5) 00:08:25.776 18450.905 - 18551.729: 98.7654% ( 1) 00:08:25.776 22483.889 - 22584.714: 98.7751% ( 1) 00:08:25.776 22584.714 - 22685.538: 98.8040% ( 3) 00:08:25.776 22685.538 - 22786.363: 98.8522% ( 5) 00:08:25.776 22786.363 - 22887.188: 98.9005% ( 5) 00:08:25.776 22887.188 - 22988.012: 98.9583% ( 6) 00:08:25.776 22988.012 - 23088.837: 99.0066% ( 5) 00:08:25.776 23088.837 - 23189.662: 99.0548% ( 5) 00:08:25.776 23189.662 - 23290.486: 99.1030% ( 5) 00:08:25.776 23290.486 - 23391.311: 99.1609% ( 6) 00:08:25.776 23391.311 - 23492.135: 99.1995% ( 4) 00:08:25.776 23492.135 - 23592.960: 99.2573% ( 6) 00:08:25.776 23592.960 - 23693.785: 99.3056% ( 5) 00:08:25.776 23693.785 - 23794.609: 99.3538% ( 5) 
00:08:25.776 23794.609 - 23895.434: 99.3827% ( 3) 00:08:25.776 28432.542 - 28634.191: 99.4695% ( 9) 00:08:25.776 28634.191 - 28835.840: 99.5756% ( 11) 00:08:25.776 28835.840 - 29037.489: 99.6914% ( 12) 00:08:25.776 29037.489 - 29239.138: 99.7975% ( 11) 00:08:25.776 29239.138 - 29440.788: 99.9132% ( 12) 00:08:25.776 29440.788 - 29642.437: 100.0000% ( 9) 00:08:25.776 00:08:25.776 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:25.776 ============================================================================== 00:08:25.776 Range in us Cumulative IO count 00:08:25.776 5217.674 - 5242.880: 0.0192% ( 2) 00:08:25.776 5242.880 - 5268.086: 0.0479% ( 3) 00:08:25.776 5268.086 - 5293.292: 0.0959% ( 5) 00:08:25.776 5293.292 - 5318.498: 0.1150% ( 2) 00:08:25.776 5318.498 - 5343.705: 0.2013% ( 9) 00:08:25.776 5343.705 - 5368.911: 0.3547% ( 16) 00:08:25.776 5368.911 - 5394.117: 0.4218% ( 7) 00:08:25.776 5394.117 - 5419.323: 0.4505% ( 3) 00:08:25.776 5419.323 - 5444.529: 0.4601% ( 1) 00:08:25.776 5444.529 - 5469.735: 0.4793% ( 2) 00:08:25.776 5469.735 - 5494.942: 0.4889% ( 1) 00:08:25.776 5494.942 - 5520.148: 0.5081% ( 2) 00:08:25.776 5520.148 - 5545.354: 0.5272% ( 2) 00:08:25.776 5545.354 - 5570.560: 0.5368% ( 1) 00:08:25.776 5570.560 - 5595.766: 0.5560% ( 2) 00:08:25.776 5595.766 - 5620.972: 0.5752% ( 2) 00:08:25.776 5620.972 - 5646.178: 0.5943% ( 2) 00:08:25.776 5646.178 - 5671.385: 0.6039% ( 1) 00:08:25.776 5671.385 - 5696.591: 0.6135% ( 1) 00:08:25.776 8620.505 - 8670.917: 0.6231% ( 1) 00:08:25.776 8670.917 - 8721.329: 0.6423% ( 2) 00:08:25.776 8721.329 - 8771.742: 0.6806% ( 4) 00:08:25.776 8771.742 - 8822.154: 0.7285% ( 5) 00:08:25.776 8822.154 - 8872.566: 0.8148% ( 9) 00:08:25.776 8872.566 - 8922.978: 0.8915% ( 8) 00:08:25.776 8922.978 - 8973.391: 0.9873% ( 10) 00:08:25.776 8973.391 - 9023.803: 1.1982% ( 22) 00:08:25.776 9023.803 - 9074.215: 1.3133% ( 12) 00:08:25.776 9074.215 - 9124.628: 1.4091% ( 10) 00:08:25.776 9124.628 - 9175.040: 1.5337% ( 13) 
00:08:25.776 9175.040 - 9225.452: 1.6488% ( 12) 00:08:25.776 9225.452 - 9275.865: 1.8021% ( 16) 00:08:25.776 9275.865 - 9326.277: 1.9843% ( 19) 00:08:25.776 9326.277 - 9376.689: 2.3869% ( 42) 00:08:25.776 9376.689 - 9427.102: 2.5403% ( 16) 00:08:25.776 9427.102 - 9477.514: 2.6745% ( 14) 00:08:25.776 9477.514 - 9527.926: 2.7607% ( 9) 00:08:25.776 9527.926 - 9578.338: 2.9429% ( 19) 00:08:25.776 9578.338 - 9628.751: 3.2592% ( 33) 00:08:25.776 9628.751 - 9679.163: 3.5660% ( 32) 00:08:25.776 9679.163 - 9729.575: 3.9302% ( 38) 00:08:25.776 9729.575 - 9779.988: 4.4670% ( 56) 00:08:25.776 9779.988 - 9830.400: 5.1764% ( 74) 00:08:25.776 9830.400 - 9880.812: 5.9433% ( 80) 00:08:25.776 9880.812 - 9931.225: 6.6334% ( 72) 00:08:25.776 9931.225 - 9981.637: 7.3236% ( 72) 00:08:25.776 9981.637 - 10032.049: 8.1097% ( 82) 00:08:25.776 10032.049 - 10082.462: 8.6561% ( 57) 00:08:25.776 10082.462 - 10132.874: 9.2312% ( 60) 00:08:25.776 10132.874 - 10183.286: 9.8639% ( 66) 00:08:25.776 10183.286 - 10233.698: 10.4582% ( 62) 00:08:25.776 10233.698 - 10284.111: 10.9950% ( 56) 00:08:25.776 10284.111 - 10334.523: 11.6469% ( 68) 00:08:25.776 10334.523 - 10384.935: 12.6246% ( 102) 00:08:25.776 10384.935 - 10435.348: 13.8420% ( 127) 00:08:25.776 10435.348 - 10485.760: 14.9156% ( 112) 00:08:25.776 10485.760 - 10536.172: 16.0468% ( 118) 00:08:25.776 10536.172 - 10586.585: 17.2067% ( 121) 00:08:25.776 10586.585 - 10636.997: 18.8650% ( 173) 00:08:25.776 10636.997 - 10687.409: 20.1208% ( 131) 00:08:25.776 10687.409 - 10737.822: 21.3478% ( 128) 00:08:25.776 10737.822 - 10788.234: 22.5748% ( 128) 00:08:25.776 10788.234 - 10838.646: 23.6963% ( 117) 00:08:25.776 10838.646 - 10889.058: 24.7891% ( 114) 00:08:25.776 10889.058 - 10939.471: 26.1407% ( 141) 00:08:25.776 10939.471 - 10989.883: 27.3773% ( 129) 00:08:25.776 10989.883 - 11040.295: 28.6426% ( 132) 00:08:25.776 11040.295 - 11090.708: 29.8025% ( 121) 00:08:25.776 11090.708 - 11141.120: 30.9624% ( 121) 00:08:25.776 11141.120 - 11191.532: 32.2661% ( 
136) 00:08:25.776 11191.532 - 11241.945: 33.4739% ( 126) 00:08:25.776 11241.945 - 11292.357: 34.5475% ( 112) 00:08:25.776 11292.357 - 11342.769: 35.7458% ( 125) 00:08:25.776 11342.769 - 11393.182: 37.4904% ( 182) 00:08:25.776 11393.182 - 11443.594: 38.7558% ( 132) 00:08:25.776 11443.594 - 11494.006: 39.7048% ( 99) 00:08:25.776 11494.006 - 11544.418: 40.9988% ( 135) 00:08:25.776 11544.418 - 11594.831: 42.4080% ( 147) 00:08:25.776 11594.831 - 11645.243: 43.6350% ( 128) 00:08:25.776 11645.243 - 11695.655: 44.8140% ( 123) 00:08:25.776 11695.655 - 11746.068: 46.2711% ( 152) 00:08:25.776 11746.068 - 11796.480: 47.5939% ( 138) 00:08:25.776 11796.480 - 11846.892: 49.0606% ( 153) 00:08:25.776 11846.892 - 11897.305: 50.4122% ( 141) 00:08:25.776 11897.305 - 11947.717: 52.0322% ( 169) 00:08:25.776 11947.717 - 11998.129: 53.4893% ( 152) 00:08:25.776 11998.129 - 12048.542: 54.9847% ( 156) 00:08:25.776 12048.542 - 12098.954: 56.2212% ( 129) 00:08:25.776 12098.954 - 12149.366: 57.2373% ( 106) 00:08:25.776 12149.366 - 12199.778: 58.1192% ( 92) 00:08:25.776 12199.778 - 12250.191: 59.0203% ( 94) 00:08:25.776 12250.191 - 12300.603: 60.0077% ( 103) 00:08:25.776 12300.603 - 12351.015: 61.0717% ( 111) 00:08:25.776 12351.015 - 12401.428: 62.0686% ( 104) 00:08:25.776 12401.428 - 12451.840: 62.8259% ( 79) 00:08:25.776 12451.840 - 12502.252: 63.6791% ( 89) 00:08:25.776 12502.252 - 12552.665: 64.9732% ( 135) 00:08:25.776 12552.665 - 12603.077: 66.2289% ( 131) 00:08:25.776 12603.077 - 12653.489: 67.6093% ( 144) 00:08:25.776 12653.489 - 12703.902: 68.6637% ( 110) 00:08:25.776 12703.902 - 12754.314: 69.7853% ( 117) 00:08:25.776 12754.314 - 12804.726: 70.6480% ( 90) 00:08:25.776 12804.726 - 12855.138: 71.5491% ( 94) 00:08:25.776 12855.138 - 12905.551: 72.2776% ( 76) 00:08:25.776 12905.551 - 13006.375: 73.4950% ( 127) 00:08:25.776 13006.375 - 13107.200: 74.6166% ( 117) 00:08:25.776 13107.200 - 13208.025: 75.6614% ( 109) 00:08:25.776 13208.025 - 13308.849: 76.4091% ( 78) 00:08:25.776 13308.849 - 
13409.674: 77.1856% ( 81) 00:08:25.777 13409.674 - 13510.498: 77.9333% ( 78) 00:08:25.777 13510.498 - 13611.323: 78.8056% ( 91) 00:08:25.777 13611.323 - 13712.148: 79.4766% ( 70) 00:08:25.777 13712.148 - 13812.972: 80.3585% ( 92) 00:08:25.777 13812.972 - 13913.797: 81.3746% ( 106) 00:08:25.777 13913.797 - 14014.622: 82.4482% ( 112) 00:08:25.777 14014.622 - 14115.446: 83.1863% ( 77) 00:08:25.777 14115.446 - 14216.271: 83.8957% ( 74) 00:08:25.777 14216.271 - 14317.095: 84.5284% ( 66) 00:08:25.777 14317.095 - 14417.920: 85.5445% ( 106) 00:08:25.777 14417.920 - 14518.745: 86.2826% ( 77) 00:08:25.777 14518.745 - 14619.569: 87.2604% ( 102) 00:08:25.777 14619.569 - 14720.394: 88.0752% ( 85) 00:08:25.777 14720.394 - 14821.218: 88.8995% ( 86) 00:08:25.777 14821.218 - 14922.043: 89.6856% ( 82) 00:08:25.777 14922.043 - 15022.868: 90.4525% ( 80) 00:08:25.777 15022.868 - 15123.692: 91.0947% ( 67) 00:08:25.777 15123.692 - 15224.517: 91.8041% ( 74) 00:08:25.777 15224.517 - 15325.342: 92.2258% ( 44) 00:08:25.777 15325.342 - 15426.166: 92.7243% ( 52) 00:08:25.777 15426.166 - 15526.991: 93.1077% ( 40) 00:08:25.777 15526.991 - 15627.815: 93.4720% ( 38) 00:08:25.777 15627.815 - 15728.640: 93.7308% ( 27) 00:08:25.777 15728.640 - 15829.465: 93.9321% ( 21) 00:08:25.777 15829.465 - 15930.289: 94.2964% ( 38) 00:08:25.777 15930.289 - 16031.114: 94.6990% ( 42) 00:08:25.777 16031.114 - 16131.938: 95.2742% ( 60) 00:08:25.777 16131.938 - 16232.763: 95.6576% ( 40) 00:08:25.777 16232.763 - 16333.588: 96.0410% ( 40) 00:08:25.777 16333.588 - 16434.412: 96.2711% ( 24) 00:08:25.777 16434.412 - 16535.237: 96.4724% ( 21) 00:08:25.777 16535.237 - 16636.062: 96.6929% ( 23) 00:08:25.777 16636.062 - 16736.886: 96.9421% ( 26) 00:08:25.777 16736.886 - 16837.711: 97.3447% ( 42) 00:08:25.777 16837.711 - 16938.535: 97.7186% ( 39) 00:08:25.777 16938.535 - 17039.360: 97.9390% ( 23) 00:08:25.777 17039.360 - 17140.185: 98.0349% ( 10) 00:08:25.777 17140.185 - 17241.009: 98.0924% ( 6) 00:08:25.777 17241.009 - 
17341.834: 98.1020% ( 1) 00:08:25.777 17442.658 - 17543.483: 98.1308% ( 3) 00:08:25.777 17543.483 - 17644.308: 98.1595% ( 3) 00:08:25.777 17644.308 - 17745.132: 98.2458% ( 9) 00:08:25.777 17745.132 - 17845.957: 98.3033% ( 6) 00:08:25.777 17845.957 - 17946.782: 98.6005% ( 31) 00:08:25.777 17946.782 - 18047.606: 98.6771% ( 8) 00:08:25.777 18047.606 - 18148.431: 98.8018% ( 13) 00:08:25.777 18148.431 - 18249.255: 98.9072% ( 11) 00:08:25.777 18249.255 - 18350.080: 98.9839% ( 8) 00:08:25.777 18350.080 - 18450.905: 99.1085% ( 13) 00:08:25.777 18450.905 - 18551.729: 99.2331% ( 13) 00:08:25.777 18551.729 - 18652.554: 99.2906% ( 6) 00:08:25.777 18652.554 - 18753.378: 99.3386% ( 5) 00:08:25.777 18753.378 - 18854.203: 99.3865% ( 5) 00:08:25.777 22584.714 - 22685.538: 99.4153% ( 3) 00:08:25.777 22685.538 - 22786.363: 99.4632% ( 5) 00:08:25.777 22786.363 - 22887.188: 99.5207% ( 6) 00:08:25.777 22887.188 - 22988.012: 99.5782% ( 6) 00:08:25.777 22988.012 - 23088.837: 99.6357% ( 6) 00:08:25.777 23088.837 - 23189.662: 99.6837% ( 5) 00:08:25.777 23189.662 - 23290.486: 99.7412% ( 6) 00:08:25.777 23290.486 - 23391.311: 99.7987% ( 6) 00:08:25.777 23391.311 - 23492.135: 99.8466% ( 5) 00:08:25.777 23492.135 - 23592.960: 99.8946% ( 5) 00:08:25.777 23592.960 - 23693.785: 99.9521% ( 6) 00:08:25.777 23693.785 - 23794.609: 100.0000% ( 5) 00:08:25.777 00:08:25.777 18:19:14 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:25.777 00:08:25.777 real 0m2.501s 00:08:25.777 user 0m2.159s 00:08:25.777 sys 0m0.228s 00:08:25.777 18:19:14 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.777 18:19:14 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:25.777 ************************************ 00:08:25.777 END TEST nvme_perf 00:08:25.777 ************************************ 00:08:25.777 18:19:14 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:25.777 18:19:14 nvme -- 
common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:25.777 18:19:14 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.777 18:19:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.777 ************************************ 00:08:25.777 START TEST nvme_hello_world 00:08:25.777 ************************************ 00:08:25.777 18:19:14 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:26.038 Initializing NVMe Controllers 00:08:26.038 Attached to 0000:00:10.0 00:08:26.038 Namespace ID: 1 size: 6GB 00:08:26.038 Attached to 0000:00:11.0 00:08:26.038 Namespace ID: 1 size: 5GB 00:08:26.038 Attached to 0000:00:13.0 00:08:26.038 Namespace ID: 1 size: 1GB 00:08:26.038 Attached to 0000:00:12.0 00:08:26.038 Namespace ID: 1 size: 4GB 00:08:26.038 Namespace ID: 2 size: 4GB 00:08:26.038 Namespace ID: 3 size: 4GB 00:08:26.038 Initialization complete. 00:08:26.038 INFO: using host memory buffer for IO 00:08:26.038 Hello world! 00:08:26.038 INFO: using host memory buffer for IO 00:08:26.038 Hello world! 00:08:26.038 INFO: using host memory buffer for IO 00:08:26.038 Hello world! 00:08:26.038 INFO: using host memory buffer for IO 00:08:26.038 Hello world! 00:08:26.038 INFO: using host memory buffer for IO 00:08:26.038 Hello world! 00:08:26.038 INFO: using host memory buffer for IO 00:08:26.038 Hello world! 
00:08:26.038 00:08:26.038 real 0m0.230s 00:08:26.038 user 0m0.068s 00:08:26.038 sys 0m0.118s 00:08:26.038 18:19:14 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.038 18:19:14 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:26.038 ************************************ 00:08:26.038 END TEST nvme_hello_world 00:08:26.038 ************************************ 00:08:26.038 18:19:14 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:26.038 18:19:14 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.038 18:19:14 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.038 18:19:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.038 ************************************ 00:08:26.038 START TEST nvme_sgl 00:08:26.038 ************************************ 00:08:26.038 18:19:14 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:26.298 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:26.298 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:26.298 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:26.298 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:26.298 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:26.298 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:26.298 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:26.298 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:26.298 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:26.298 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:26.298 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:26.298 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:26.298 
0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:26.298 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:26.298 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:26.298 NVMe Readv/Writev Request test 00:08:26.298 Attached to 0000:00:10.0 00:08:26.298 Attached to 0000:00:11.0 00:08:26.298 Attached to 0000:00:13.0 00:08:26.298 Attached to 0000:00:12.0 00:08:26.298 0000:00:10.0: build_io_request_2 test passed 00:08:26.298 0000:00:10.0: build_io_request_4 test 
passed 00:08:26.298 0000:00:10.0: build_io_request_5 test passed 00:08:26.298 0000:00:10.0: build_io_request_6 test passed 00:08:26.298 0000:00:10.0: build_io_request_7 test passed 00:08:26.298 0000:00:10.0: build_io_request_10 test passed 00:08:26.298 0000:00:11.0: build_io_request_2 test passed 00:08:26.298 0000:00:11.0: build_io_request_4 test passed 00:08:26.298 0000:00:11.0: build_io_request_5 test passed 00:08:26.298 0000:00:11.0: build_io_request_6 test passed 00:08:26.298 0000:00:11.0: build_io_request_7 test passed 00:08:26.298 0000:00:11.0: build_io_request_10 test passed 00:08:26.298 Cleaning up... 00:08:26.298 00:08:26.298 real 0m0.285s 00:08:26.298 user 0m0.135s 00:08:26.298 sys 0m0.104s 00:08:26.298 18:19:15 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.298 18:19:15 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:26.298 ************************************ 00:08:26.298 END TEST nvme_sgl 00:08:26.298 ************************************ 00:08:26.298 18:19:15 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:26.298 18:19:15 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.298 18:19:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.298 18:19:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.298 ************************************ 00:08:26.298 START TEST nvme_e2edp 00:08:26.298 ************************************ 00:08:26.298 18:19:15 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:26.557 NVMe Write/Read with End-to-End data protection test 00:08:26.557 Attached to 0000:00:10.0 00:08:26.557 Attached to 0000:00:11.0 00:08:26.557 Attached to 0000:00:13.0 00:08:26.557 Attached to 0000:00:12.0 00:08:26.557 Cleaning up... 
00:08:26.557 ************************************ 00:08:26.557 END TEST nvme_e2edp 00:08:26.557 ************************************ 00:08:26.557 00:08:26.557 real 0m0.177s 00:08:26.557 user 0m0.052s 00:08:26.557 sys 0m0.081s 00:08:26.557 18:19:15 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.557 18:19:15 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:26.557 18:19:15 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:26.557 18:19:15 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.557 18:19:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.557 18:19:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.557 ************************************ 00:08:26.557 START TEST nvme_reserve 00:08:26.557 ************************************ 00:08:26.558 18:19:15 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:26.816 ===================================================== 00:08:26.816 NVMe Controller at PCI bus 0, device 16, function 0 00:08:26.816 ===================================================== 00:08:26.816 Reservations: Not Supported 00:08:26.816 ===================================================== 00:08:26.816 NVMe Controller at PCI bus 0, device 17, function 0 00:08:26.816 ===================================================== 00:08:26.816 Reservations: Not Supported 00:08:26.816 ===================================================== 00:08:26.816 NVMe Controller at PCI bus 0, device 19, function 0 00:08:26.816 ===================================================== 00:08:26.816 Reservations: Not Supported 00:08:26.816 ===================================================== 00:08:26.816 NVMe Controller at PCI bus 0, device 18, function 0 00:08:26.816 ===================================================== 00:08:26.816 Reservations: Not Supported 
00:08:26.816 Reservation test passed 00:08:26.816 00:08:26.816 real 0m0.187s 00:08:26.816 user 0m0.065s 00:08:26.816 sys 0m0.081s 00:08:26.816 18:19:15 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.816 18:19:15 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:26.816 ************************************ 00:08:26.816 END TEST nvme_reserve 00:08:26.816 ************************************ 00:08:26.816 18:19:15 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:26.816 18:19:15 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.816 18:19:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.816 18:19:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:26.816 ************************************ 00:08:26.816 START TEST nvme_err_injection 00:08:26.816 ************************************ 00:08:26.816 18:19:15 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:27.151 NVMe Error Injection test 00:08:27.151 Attached to 0000:00:10.0 00:08:27.151 Attached to 0000:00:11.0 00:08:27.151 Attached to 0000:00:13.0 00:08:27.151 Attached to 0000:00:12.0 00:08:27.151 0000:00:10.0: get features failed as expected 00:08:27.151 0000:00:11.0: get features failed as expected 00:08:27.151 0000:00:13.0: get features failed as expected 00:08:27.151 0000:00:12.0: get features failed as expected 00:08:27.151 0000:00:10.0: get features successfully as expected 00:08:27.151 0000:00:11.0: get features successfully as expected 00:08:27.151 0000:00:13.0: get features successfully as expected 00:08:27.151 0000:00:12.0: get features successfully as expected 00:08:27.151 0000:00:12.0: read failed as expected 00:08:27.151 0000:00:10.0: read failed as expected 00:08:27.151 0000:00:11.0: read failed as expected 00:08:27.151 0000:00:13.0: read failed as 
expected 00:08:27.151 0000:00:13.0: read successfully as expected 00:08:27.151 0000:00:10.0: read successfully as expected 00:08:27.151 0000:00:11.0: read successfully as expected 00:08:27.151 0000:00:12.0: read successfully as expected 00:08:27.151 Cleaning up... 00:08:27.151 ************************************ 00:08:27.151 END TEST nvme_err_injection 00:08:27.151 ************************************ 00:08:27.151 00:08:27.151 real 0m0.179s 00:08:27.151 user 0m0.065s 00:08:27.151 sys 0m0.076s 00:08:27.151 18:19:15 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.151 18:19:15 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:27.151 18:19:15 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:27.151 18:19:15 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']' 00:08:27.151 18:19:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.151 18:19:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:27.151 ************************************ 00:08:27.151 START TEST nvme_overhead 00:08:27.152 ************************************ 00:08:27.152 18:19:15 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:28.086 Initializing NVMe Controllers 00:08:28.086 Attached to 0000:00:10.0 00:08:28.086 Attached to 0000:00:11.0 00:08:28.086 Attached to 0000:00:13.0 00:08:28.086 Attached to 0000:00:12.0 00:08:28.086 Initialization complete. Launching workers. 
00:08:28.086 submit (in ns) avg, min, max = 12775.7, 10276.9, 166670.8 00:08:28.086 complete (in ns) avg, min, max = 8325.3, 7386.9, 152708.5 00:08:28.086 00:08:28.086 Submit histogram 00:08:28.086 ================ 00:08:28.086 Range in us Cumulative Count 00:08:28.086 10.240 - 10.289: 0.0288% ( 1) 00:08:28.086 10.732 - 10.782: 0.0577% ( 1) 00:08:28.086 10.782 - 10.831: 0.2306% ( 6) 00:08:28.086 10.831 - 10.880: 0.4036% ( 6) 00:08:28.087 10.880 - 10.929: 0.6918% ( 10) 00:08:28.087 10.929 - 10.978: 0.8936% ( 7) 00:08:28.087 10.978 - 11.028: 1.2684% ( 13) 00:08:28.087 11.028 - 11.077: 1.7008% ( 15) 00:08:28.087 11.077 - 11.126: 2.9115% ( 42) 00:08:28.087 11.126 - 11.175: 6.2842% ( 117) 00:08:28.087 11.175 - 11.225: 12.0784% ( 201) 00:08:28.087 11.225 - 11.274: 21.5912% ( 330) 00:08:28.087 11.274 - 11.323: 34.5344% ( 449) 00:08:28.087 11.323 - 11.372: 46.5552% ( 417) 00:08:28.087 11.372 - 11.422: 55.0014% ( 293) 00:08:28.087 11.422 - 11.471: 60.3344% ( 185) 00:08:28.087 11.471 - 11.520: 63.9089% ( 124) 00:08:28.087 11.520 - 11.569: 66.6474% ( 95) 00:08:28.087 11.569 - 11.618: 68.7518% ( 73) 00:08:28.087 11.618 - 11.668: 70.5391% ( 62) 00:08:28.087 11.668 - 11.717: 71.8651% ( 46) 00:08:28.087 11.717 - 11.766: 72.8452% ( 34) 00:08:28.087 11.766 - 11.815: 73.5082% ( 23) 00:08:28.087 11.815 - 11.865: 74.0559% ( 19) 00:08:28.087 11.865 - 11.914: 74.4595% ( 14) 00:08:28.087 11.914 - 11.963: 74.8919% ( 15) 00:08:28.087 11.963 - 12.012: 75.2090% ( 11) 00:08:28.087 12.012 - 12.062: 75.2666% ( 2) 00:08:28.087 12.062 - 12.111: 75.4108% ( 5) 00:08:28.087 12.111 - 12.160: 75.4973% ( 3) 00:08:28.087 12.160 - 12.209: 75.5837% ( 3) 00:08:28.087 12.209 - 12.258: 75.6990% ( 4) 00:08:28.087 12.258 - 12.308: 75.7855% ( 3) 00:08:28.087 12.308 - 12.357: 75.9008% ( 4) 00:08:28.087 12.357 - 12.406: 75.9585% ( 2) 00:08:28.087 12.406 - 12.455: 76.0450% ( 3) 00:08:28.087 12.455 - 12.505: 76.1314% ( 3) 00:08:28.087 12.505 - 12.554: 76.1603% ( 1) 00:08:28.087 12.554 - 12.603: 76.3621% ( 7) 
00:08:28.087 12.603 - 12.702: 76.5062% ( 5) 00:08:28.087 12.702 - 12.800: 76.5927% ( 3) 00:08:28.087 12.800 - 12.898: 76.7368% ( 5) 00:08:28.087 12.898 - 12.997: 76.8521% ( 4) 00:08:28.087 12.997 - 13.095: 76.9386% ( 3) 00:08:28.087 13.095 - 13.194: 76.9674% ( 1) 00:08:28.087 13.194 - 13.292: 77.0539% ( 3) 00:08:28.087 13.292 - 13.391: 77.0827% ( 1) 00:08:28.087 13.391 - 13.489: 77.1116% ( 1) 00:08:28.087 13.489 - 13.588: 77.1980% ( 3) 00:08:28.087 13.588 - 13.686: 77.3422% ( 5) 00:08:28.087 13.686 - 13.785: 77.4575% ( 4) 00:08:28.087 13.785 - 13.883: 77.5440% ( 3) 00:08:28.087 13.883 - 13.982: 77.6881% ( 5) 00:08:28.087 13.982 - 14.080: 77.9475% ( 9) 00:08:28.087 14.080 - 14.178: 78.2358% ( 10) 00:08:28.087 14.178 - 14.277: 78.4376% ( 7) 00:08:28.087 14.277 - 14.375: 78.7547% ( 11) 00:08:28.087 14.375 - 14.474: 79.1583% ( 14) 00:08:28.087 14.474 - 14.572: 79.6483% ( 17) 00:08:28.087 14.572 - 14.671: 80.0519% ( 14) 00:08:28.087 14.671 - 14.769: 80.5996% ( 19) 00:08:28.087 14.769 - 14.868: 81.1185% ( 18) 00:08:28.087 14.868 - 14.966: 81.6374% ( 18) 00:08:28.087 14.966 - 15.065: 82.0121% ( 13) 00:08:28.087 15.065 - 15.163: 82.1851% ( 6) 00:08:28.087 15.163 - 15.262: 82.4733% ( 10) 00:08:28.087 15.262 - 15.360: 82.6463% ( 6) 00:08:28.087 15.360 - 15.458: 82.9922% ( 12) 00:08:28.087 15.458 - 15.557: 83.1364% ( 5) 00:08:28.087 15.557 - 15.655: 83.2517% ( 4) 00:08:28.087 15.655 - 15.754: 83.3670% ( 4) 00:08:28.087 15.754 - 15.852: 83.5111% ( 5) 00:08:28.087 15.852 - 15.951: 83.8570% ( 12) 00:08:28.087 15.951 - 16.049: 84.1741% ( 11) 00:08:28.087 16.049 - 16.148: 84.3759% ( 7) 00:08:28.087 16.148 - 16.246: 84.7795% ( 14) 00:08:28.087 16.246 - 16.345: 85.2407% ( 16) 00:08:28.087 16.345 - 16.443: 85.7019% ( 16) 00:08:28.087 16.443 - 16.542: 86.2785% ( 20) 00:08:28.087 16.542 - 16.640: 86.7973% ( 18) 00:08:28.087 16.640 - 16.738: 87.2009% ( 14) 00:08:28.087 16.738 - 16.837: 88.0081% ( 28) 00:08:28.087 16.837 - 16.935: 89.0458% ( 36) 00:08:28.087 16.935 - 17.034: 89.9106% ( 
30) 00:08:28.087 17.034 - 17.132: 90.9772% ( 37) 00:08:28.087 17.132 - 17.231: 91.5826% ( 21) 00:08:28.087 17.231 - 17.329: 92.4186% ( 29) 00:08:28.087 17.329 - 17.428: 93.5428% ( 39) 00:08:28.087 17.428 - 17.526: 94.1482% ( 21) 00:08:28.087 17.526 - 17.625: 94.9265% ( 27) 00:08:28.087 17.625 - 17.723: 95.3877% ( 16) 00:08:28.087 17.723 - 17.822: 95.8778% ( 17) 00:08:28.087 17.822 - 17.920: 96.3390% ( 16) 00:08:28.087 17.920 - 18.018: 96.6273% ( 10) 00:08:28.087 18.018 - 18.117: 96.8579% ( 8) 00:08:28.087 18.117 - 18.215: 97.0020% ( 5) 00:08:28.087 18.215 - 18.314: 97.2615% ( 9) 00:08:28.087 18.314 - 18.412: 97.4921% ( 8) 00:08:28.087 18.412 - 18.511: 97.6939% ( 7) 00:08:28.087 18.511 - 18.609: 97.8380% ( 5) 00:08:28.087 18.609 - 18.708: 98.1263% ( 10) 00:08:28.087 18.708 - 18.806: 98.2416% ( 4) 00:08:28.087 18.806 - 18.905: 98.3569% ( 4) 00:08:28.087 18.905 - 19.003: 98.6163% ( 9) 00:08:28.087 19.003 - 19.102: 98.7604% ( 5) 00:08:28.087 19.200 - 19.298: 98.8181% ( 2) 00:08:28.087 19.298 - 19.397: 98.8758% ( 2) 00:08:28.087 19.397 - 19.495: 98.9046% ( 1) 00:08:28.087 19.594 - 19.692: 98.9911% ( 3) 00:08:28.087 19.889 - 19.988: 99.0487% ( 2) 00:08:28.087 19.988 - 20.086: 99.1064% ( 2) 00:08:28.087 20.382 - 20.480: 99.1352% ( 1) 00:08:28.087 20.578 - 20.677: 99.1640% ( 1) 00:08:28.087 21.465 - 21.563: 99.2217% ( 2) 00:08:28.087 21.563 - 21.662: 99.2505% ( 1) 00:08:28.087 22.252 - 22.351: 99.2793% ( 1) 00:08:28.087 22.449 - 22.548: 99.3082% ( 1) 00:08:28.087 22.646 - 22.745: 99.3658% ( 2) 00:08:28.087 22.942 - 23.040: 99.3946% ( 1) 00:08:28.087 23.335 - 23.434: 99.4235% ( 1) 00:08:28.087 23.434 - 23.532: 99.4811% ( 2) 00:08:28.087 23.532 - 23.631: 99.5099% ( 1) 00:08:28.087 23.631 - 23.729: 99.5388% ( 1) 00:08:28.087 23.729 - 23.828: 99.5676% ( 1) 00:08:28.087 24.320 - 24.418: 99.5964% ( 1) 00:08:28.087 28.160 - 28.357: 99.6253% ( 1) 00:08:28.087 30.326 - 30.523: 99.6541% ( 1) 00:08:28.087 35.052 - 35.249: 99.6829% ( 1) 00:08:28.087 39.582 - 39.778: 99.7117% ( 1) 
00:08:28.087 55.138 - 55.532: 99.7406% ( 1) 00:08:28.087 55.532 - 55.926: 99.7694% ( 1) 00:08:28.087 55.926 - 56.320: 99.8559% ( 3) 00:08:28.087 57.108 - 57.502: 99.8847% ( 1) 00:08:28.087 57.502 - 57.895: 99.9135% ( 1) 00:08:28.087 57.895 - 58.289: 99.9423% ( 1) 00:08:28.087 94.523 - 94.917: 99.9712% ( 1) 00:08:28.087 166.203 - 166.991: 100.0000% ( 1) 00:08:28.087 00:08:28.087 Complete histogram 00:08:28.087 ================== 00:08:28.087 Range in us Cumulative Count 00:08:28.087 7.385 - 7.434: 0.0577% ( 2) 00:08:28.087 7.434 - 7.483: 0.4036% ( 12) 00:08:28.087 7.483 - 7.532: 0.6630% ( 9) 00:08:28.087 7.532 - 7.582: 1.2684% ( 21) 00:08:28.087 7.582 - 7.631: 1.6431% ( 13) 00:08:28.087 7.631 - 7.680: 2.1044% ( 16) 00:08:28.087 7.680 - 7.729: 2.4503% ( 12) 00:08:28.087 7.729 - 7.778: 2.8250% ( 13) 00:08:28.087 7.778 - 7.828: 3.3727% ( 19) 00:08:28.087 7.828 - 7.877: 6.0536% ( 93) 00:08:28.087 7.877 - 7.926: 12.8856% ( 237) 00:08:28.087 7.926 - 7.975: 24.0992% ( 389) 00:08:28.087 7.975 - 8.025: 36.7541% ( 439) 00:08:28.087 8.025 - 8.074: 51.1387% ( 499) 00:08:28.087 8.074 - 8.123: 63.7648% ( 438) 00:08:28.087 8.123 - 8.172: 73.9694% ( 354) 00:08:28.087 8.172 - 8.222: 81.2914% ( 254) 00:08:28.087 8.222 - 8.271: 86.0767% ( 166) 00:08:28.087 8.271 - 8.320: 89.7953% ( 129) 00:08:28.087 8.320 - 8.369: 92.4186% ( 91) 00:08:28.087 8.369 - 8.418: 94.0329% ( 56) 00:08:28.087 8.418 - 8.468: 94.8977% ( 30) 00:08:28.087 8.468 - 8.517: 95.2148% ( 11) 00:08:28.087 8.517 - 8.566: 95.3877% ( 6) 00:08:28.087 8.566 - 8.615: 95.6472% ( 9) 00:08:28.087 8.615 - 8.665: 95.7913% ( 5) 00:08:28.087 8.714 - 8.763: 95.9066% ( 4) 00:08:28.346 8.763 - 8.812: 95.9931% ( 3) 00:08:28.346 8.812 - 8.862: 96.0219% ( 1) 00:08:28.346 8.862 - 8.911: 96.1084% ( 3) 00:08:28.346 8.911 - 8.960: 96.1660% ( 2) 00:08:28.346 8.960 - 9.009: 96.2813% ( 4) 00:08:28.346 9.009 - 9.058: 96.3678% ( 3) 00:08:28.346 9.058 - 9.108: 96.5984% ( 8) 00:08:28.346 9.108 - 9.157: 96.7714% ( 6) 00:08:28.346 9.157 - 9.206: 
96.8867% ( 4) 00:08:28.346 9.206 - 9.255: 96.9732% ( 3) 00:08:28.346 9.255 - 9.305: 97.0308% ( 2) 00:08:28.346 9.305 - 9.354: 97.0885% ( 2) 00:08:28.346 9.354 - 9.403: 97.2615% ( 6) 00:08:28.346 9.403 - 9.452: 97.2903% ( 1) 00:08:28.346 9.797 - 9.846: 97.3479% ( 2) 00:08:28.346 9.994 - 10.043: 97.3768% ( 1) 00:08:28.346 10.142 - 10.191: 97.4056% ( 1) 00:08:28.346 10.240 - 10.289: 97.4632% ( 2) 00:08:28.346 10.289 - 10.338: 97.4921% ( 1) 00:08:28.346 10.338 - 10.388: 97.5209% ( 1) 00:08:28.346 10.683 - 10.732: 97.5497% ( 1) 00:08:28.346 10.880 - 10.929: 97.5786% ( 1) 00:08:28.346 11.028 - 11.077: 97.6074% ( 1) 00:08:28.346 11.126 - 11.175: 97.6362% ( 1) 00:08:28.346 11.225 - 11.274: 97.6650% ( 1) 00:08:28.346 11.274 - 11.323: 97.6939% ( 1) 00:08:28.346 11.422 - 11.471: 97.7515% ( 2) 00:08:28.346 11.520 - 11.569: 97.7803% ( 1) 00:08:28.346 11.569 - 11.618: 97.8092% ( 1) 00:08:28.346 11.717 - 11.766: 97.8956% ( 3) 00:08:28.346 11.815 - 11.865: 97.9533% ( 2) 00:08:28.346 11.865 - 11.914: 97.9821% ( 1) 00:08:28.346 12.012 - 12.062: 98.0398% ( 2) 00:08:28.346 12.111 - 12.160: 98.0686% ( 1) 00:08:28.346 12.160 - 12.209: 98.0974% ( 1) 00:08:28.346 12.209 - 12.258: 98.1263% ( 1) 00:08:28.346 12.357 - 12.406: 98.1551% ( 1) 00:08:28.346 13.095 - 13.194: 98.1839% ( 1) 00:08:28.346 13.292 - 13.391: 98.2416% ( 2) 00:08:28.346 13.686 - 13.785: 98.2704% ( 1) 00:08:28.346 13.785 - 13.883: 98.3280% ( 2) 00:08:28.346 13.982 - 14.080: 98.3569% ( 1) 00:08:28.346 14.080 - 14.178: 98.4722% ( 4) 00:08:28.346 14.178 - 14.277: 98.5875% ( 4) 00:08:28.346 14.277 - 14.375: 98.6740% ( 3) 00:08:28.346 14.375 - 14.474: 98.8469% ( 6) 00:08:28.346 14.474 - 14.572: 98.9046% ( 2) 00:08:28.346 14.572 - 14.671: 98.9622% ( 2) 00:08:28.346 14.671 - 14.769: 99.0487% ( 3) 00:08:28.346 14.769 - 14.868: 99.1352% ( 3) 00:08:28.346 14.868 - 14.966: 99.1640% ( 1) 00:08:28.346 14.966 - 15.065: 99.2505% ( 3) 00:08:28.346 15.163 - 15.262: 99.2793% ( 1) 00:08:28.346 15.262 - 15.360: 99.3370% ( 2) 00:08:28.346 
15.360 - 15.458: 99.3658% ( 1) 00:08:28.346 15.754 - 15.852: 99.3946% ( 1) 00:08:28.346 16.640 - 16.738: 99.4235% ( 1) 00:08:28.346 16.738 - 16.837: 99.4523% ( 1) 00:08:28.346 16.837 - 16.935: 99.4811% ( 1) 00:08:28.346 17.231 - 17.329: 99.5099% ( 1) 00:08:28.346 17.625 - 17.723: 99.5388% ( 1) 00:08:28.346 17.822 - 17.920: 99.5676% ( 1) 00:08:28.346 18.018 - 18.117: 99.5964% ( 1) 00:08:28.346 18.215 - 18.314: 99.6253% ( 1) 00:08:28.346 18.708 - 18.806: 99.6541% ( 1) 00:08:28.346 19.200 - 19.298: 99.6829% ( 1) 00:08:28.346 19.988 - 20.086: 99.7117% ( 1) 00:08:28.346 21.957 - 22.055: 99.7406% ( 1) 00:08:28.346 23.926 - 24.025: 99.7694% ( 1) 00:08:28.346 26.782 - 26.978: 99.7982% ( 1) 00:08:28.346 27.569 - 27.766: 99.8270% ( 1) 00:08:28.346 28.554 - 28.751: 99.8559% ( 1) 00:08:28.346 28.948 - 29.145: 99.8847% ( 1) 00:08:28.346 32.689 - 32.886: 99.9135% ( 1) 00:08:28.346 33.280 - 33.477: 99.9423% ( 1) 00:08:28.346 57.108 - 57.502: 99.9712% ( 1) 00:08:28.346 152.025 - 152.812: 100.0000% ( 1) 00:08:28.346 00:08:28.346 ************************************ 00:08:28.346 END TEST nvme_overhead 00:08:28.346 ************************************ 00:08:28.346 00:08:28.346 real 0m1.193s 00:08:28.346 user 0m1.058s 00:08:28.346 sys 0m0.085s 00:08:28.346 18:19:16 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.346 18:19:16 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:28.346 18:19:16 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:28.346 18:19:16 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:28.346 18:19:16 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.346 18:19:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:28.346 ************************************ 00:08:28.346 START TEST nvme_arbitration 00:08:28.346 ************************************ 00:08:28.346 18:19:17 nvme.nvme_arbitration -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:31.647 Initializing NVMe Controllers 00:08:31.648 Attached to 0000:00:10.0 00:08:31.648 Attached to 0000:00:11.0 00:08:31.648 Attached to 0000:00:13.0 00:08:31.648 Attached to 0000:00:12.0 00:08:31.648 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:31.648 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:31.648 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:31.648 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:31.648 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:31.648 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:31.648 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:31.648 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:31.648 Initialization complete. Launching workers. 00:08:31.648 Starting thread on core 1 with urgent priority queue 00:08:31.648 Starting thread on core 2 with urgent priority queue 00:08:31.648 Starting thread on core 3 with urgent priority queue 00:08:31.648 Starting thread on core 0 with urgent priority queue 00:08:31.648 QEMU NVMe Ctrl (12340 ) core 0: 5824.00 IO/s 17.17 secs/100000 ios 00:08:31.648 QEMU NVMe Ctrl (12342 ) core 0: 5824.00 IO/s 17.17 secs/100000 ios 00:08:31.648 QEMU NVMe Ctrl (12341 ) core 1: 5568.00 IO/s 17.96 secs/100000 ios 00:08:31.648 QEMU NVMe Ctrl (12342 ) core 1: 5568.00 IO/s 17.96 secs/100000 ios 00:08:31.648 QEMU NVMe Ctrl (12343 ) core 2: 5504.00 IO/s 18.17 secs/100000 ios 00:08:31.648 QEMU NVMe Ctrl (12342 ) core 3: 5056.00 IO/s 19.78 secs/100000 ios 00:08:31.648 ======================================================== 00:08:31.648 00:08:31.648 00:08:31.648 real 0m3.230s 00:08:31.648 user 0m9.053s 00:08:31.648 sys 0m0.110s 00:08:31.648 18:19:20 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.648 
************************************ 00:08:31.648 END TEST nvme_arbitration 00:08:31.648 ************************************ 00:08:31.648 18:19:20 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:31.648 18:19:20 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:31.648 18:19:20 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:31.648 18:19:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.648 18:19:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.648 ************************************ 00:08:31.648 START TEST nvme_single_aen 00:08:31.648 ************************************ 00:08:31.648 18:19:20 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:31.648 Asynchronous Event Request test 00:08:31.648 Attached to 0000:00:10.0 00:08:31.648 Attached to 0000:00:11.0 00:08:31.648 Attached to 0000:00:13.0 00:08:31.648 Attached to 0000:00:12.0 00:08:31.648 Reset controller to setup AER completions for this process 00:08:31.648 Registering asynchronous event callbacks... 
00:08:31.648 Getting orig temperature thresholds of all controllers 00:08:31.648 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.648 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.648 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.648 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:31.648 Setting all controllers temperature threshold low to trigger AER 00:08:31.648 Waiting for all controllers temperature threshold to be set lower 00:08:31.648 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.648 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:31.648 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.648 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:31.648 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.648 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:31.648 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:31.648 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:31.648 Waiting for all controllers to trigger AER and reset threshold 00:08:31.648 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.648 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.648 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.648 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:31.648 Cleaning up... 
00:08:31.910 ************************************ 00:08:31.910 END TEST nvme_single_aen 00:08:31.910 ************************************ 00:08:31.910 00:08:31.910 real 0m0.205s 00:08:31.910 user 0m0.074s 00:08:31.910 sys 0m0.097s 00:08:31.910 18:19:20 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.910 18:19:20 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:31.910 18:19:20 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:31.910 18:19:20 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.910 18:19:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.910 18:19:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.910 ************************************ 00:08:31.910 START TEST nvme_doorbell_aers 00:08:31.910 ************************************ 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- 
common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:31.910 18:19:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:32.170 [2024-10-08 18:19:20.846631] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:08:42.248 Executing: test_write_invalid_db 00:08:42.248 Waiting for AER completion... 00:08:42.248 Failure: test_write_invalid_db 00:08:42.248 00:08:42.248 Executing: test_invalid_db_write_overflow_sq 00:08:42.248 Waiting for AER completion... 00:08:42.248 Failure: test_invalid_db_write_overflow_sq 00:08:42.248 00:08:42.248 Executing: test_invalid_db_write_overflow_cq 00:08:42.248 Waiting for AER completion... 00:08:42.248 Failure: test_invalid_db_write_overflow_cq 00:08:42.248 00:08:42.248 18:19:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:42.248 18:19:30 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:42.248 [2024-10-08 18:19:30.885295] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:08:52.223 Executing: test_write_invalid_db 00:08:52.223 Waiting for AER completion... 00:08:52.223 Failure: test_write_invalid_db 00:08:52.223 00:08:52.223 Executing: test_invalid_db_write_overflow_sq 00:08:52.223 Waiting for AER completion... 
00:08:52.223 Failure: test_invalid_db_write_overflow_sq 00:08:52.223 00:08:52.223 Executing: test_invalid_db_write_overflow_cq 00:08:52.223 Waiting for AER completion... 00:08:52.223 Failure: test_invalid_db_write_overflow_cq 00:08:52.223 00:08:52.223 18:19:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:52.223 18:19:40 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:52.223 [2024-10-08 18:19:40.899379] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:02.186 Executing: test_write_invalid_db 00:09:02.186 Waiting for AER completion... 00:09:02.186 Failure: test_write_invalid_db 00:09:02.186 00:09:02.186 Executing: test_invalid_db_write_overflow_sq 00:09:02.186 Waiting for AER completion... 00:09:02.186 Failure: test_invalid_db_write_overflow_sq 00:09:02.186 00:09:02.186 Executing: test_invalid_db_write_overflow_cq 00:09:02.186 Waiting for AER completion... 00:09:02.186 Failure: test_invalid_db_write_overflow_cq 00:09:02.186 00:09:02.186 18:19:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:02.186 18:19:50 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:02.186 [2024-10-08 18:19:50.954547] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 Executing: test_write_invalid_db 00:09:12.152 Waiting for AER completion... 00:09:12.152 Failure: test_write_invalid_db 00:09:12.152 00:09:12.152 Executing: test_invalid_db_write_overflow_sq 00:09:12.152 Waiting for AER completion... 
00:09:12.152 Failure: test_invalid_db_write_overflow_sq 00:09:12.152 00:09:12.152 Executing: test_invalid_db_write_overflow_cq 00:09:12.152 Waiting for AER completion... 00:09:12.152 Failure: test_invalid_db_write_overflow_cq 00:09:12.152 00:09:12.152 00:09:12.152 real 0m40.213s 00:09:12.152 user 0m34.193s 00:09:12.152 sys 0m5.609s 00:09:12.152 18:20:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.152 18:20:00 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:12.152 ************************************ 00:09:12.152 END TEST nvme_doorbell_aers 00:09:12.152 ************************************ 00:09:12.152 18:20:00 nvme -- nvme/nvme.sh@97 -- # uname 00:09:12.152 18:20:00 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:12.152 18:20:00 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:12.152 18:20:00 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:12.152 18:20:00 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.152 18:20:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:12.152 ************************************ 00:09:12.152 START TEST nvme_multi_aen 00:09:12.152 ************************************ 00:09:12.152 18:20:00 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:12.152 [2024-10-08 18:20:00.990734] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.990796] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.990809] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. 
Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.992183] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.992208] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.992216] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.993257] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.993277] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.993287] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.994560] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.994650] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 00:09:12.152 [2024-10-08 18:20:00.994706] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76427) is not found. Dropping the request. 
00:09:12.410 Child process pid: 76953 00:09:12.410 [Child] Asynchronous Event Request test 00:09:12.410 [Child] Attached to 0000:00:10.0 00:09:12.410 [Child] Attached to 0000:00:11.0 00:09:12.410 [Child] Attached to 0000:00:13.0 00:09:12.410 [Child] Attached to 0000:00:12.0 00:09:12.410 [Child] Registering asynchronous event callbacks... 00:09:12.410 [Child] Getting orig temperature thresholds of all controllers 00:09:12.410 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:12.410 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:12.410 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:12.410 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:12.410 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:12.410 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:12.410 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:12.410 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:12.410 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:12.410 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:12.410 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:12.410 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:12.410 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:12.410 [Child] Cleaning up... 00:09:12.410 Asynchronous Event Request test 00:09:12.410 Attached to 0000:00:10.0 00:09:12.410 Attached to 0000:00:11.0 00:09:12.410 Attached to 0000:00:13.0 00:09:12.410 Attached to 0000:00:12.0 00:09:12.410 Reset controller to setup AER completions for this process 00:09:12.410 Registering asynchronous event callbacks... 
00:09:12.410 Getting orig temperature thresholds of all controllers 00:09:12.410 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:12.410 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:12.410 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:12.410 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:12.410 Setting all controllers temperature threshold low to trigger AER 00:09:12.410 Waiting for all controllers temperature threshold to be set lower 00:09:12.410 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:12.410 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:12.410 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:12.410 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:12.410 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:12.410 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:12.410 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:12.410 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:12.410 Waiting for all controllers to trigger AER and reset threshold 00:09:12.410 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:12.410 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:12.410 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:12.410 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:12.410 Cleaning up... 
00:09:12.410 00:09:12.410 real 0m0.370s 00:09:12.410 user 0m0.104s 00:09:12.410 sys 0m0.164s 00:09:12.410 18:20:01 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.410 18:20:01 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:12.410 ************************************ 00:09:12.410 END TEST nvme_multi_aen 00:09:12.410 ************************************ 00:09:12.410 18:20:01 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:12.410 18:20:01 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:12.410 18:20:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.410 18:20:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:12.410 ************************************ 00:09:12.410 START TEST nvme_startup 00:09:12.410 ************************************ 00:09:12.410 18:20:01 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:12.669 Initializing NVMe Controllers 00:09:12.669 Attached to 0000:00:10.0 00:09:12.669 Attached to 0000:00:11.0 00:09:12.669 Attached to 0000:00:13.0 00:09:12.669 Attached to 0000:00:12.0 00:09:12.669 Initialization complete. 00:09:12.669 Time used:131077.656 (us). 
00:09:12.669 00:09:12.669 real 0m0.183s 00:09:12.669 user 0m0.052s 00:09:12.669 sys 0m0.087s 00:09:12.669 18:20:01 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:12.669 18:20:01 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:12.669 ************************************ 00:09:12.669 END TEST nvme_startup 00:09:12.669 ************************************ 00:09:12.669 18:20:01 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:12.669 18:20:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:12.669 18:20:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:12.669 18:20:01 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:12.669 ************************************ 00:09:12.669 START TEST nvme_multi_secondary 00:09:12.669 ************************************ 00:09:12.669 18:20:01 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:12.669 18:20:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77003 00:09:12.669 18:20:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:12.669 18:20:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77004 00:09:12.669 18:20:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:12.669 18:20:01 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:15.952 Initializing NVMe Controllers 00:09:15.953 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:15.953 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:15.953 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:15.953 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:15.953 
Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:15.953 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:15.953 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:15.953 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:15.953 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:15.953 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:15.953 Initialization complete. Launching workers. 00:09:15.953 ======================================================== 00:09:15.953 Latency(us) 00:09:15.953 Device Information : IOPS MiB/s Average min max 00:09:15.953 PCIE (0000:00:10.0) NSID 1 from core 1: 7299.97 28.52 2190.43 990.48 5911.00 00:09:15.953 PCIE (0000:00:11.0) NSID 1 from core 1: 7299.97 28.52 2191.42 1061.59 6206.65 00:09:15.953 PCIE (0000:00:13.0) NSID 1 from core 1: 7299.97 28.52 2191.41 1066.63 6627.93 00:09:15.953 PCIE (0000:00:12.0) NSID 1 from core 1: 7299.97 28.52 2191.42 977.03 6901.42 00:09:15.953 PCIE (0000:00:12.0) NSID 2 from core 1: 7299.97 28.52 2191.40 1018.53 6308.76 00:09:15.953 PCIE (0000:00:12.0) NSID 3 from core 1: 7299.97 28.52 2191.42 1045.34 6176.23 00:09:15.953 ======================================================== 00:09:15.953 Total : 43799.82 171.09 2191.25 977.03 6901.42 00:09:15.953 00:09:15.953 Initializing NVMe Controllers 00:09:15.953 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:15.953 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:15.953 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:15.953 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:15.953 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:15.953 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:15.953 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:15.953 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:15.953 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:15.953 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:15.953 
Initialization complete. Launching workers. 00:09:15.953 ======================================================== 00:09:15.953 Latency(us) 00:09:15.953 Device Information : IOPS MiB/s Average min max 00:09:15.953 PCIE (0000:00:10.0) NSID 1 from core 2: 2887.44 11.28 5539.82 1438.57 14134.59 00:09:15.953 PCIE (0000:00:11.0) NSID 1 from core 2: 2887.44 11.28 5540.84 1425.57 16270.00 00:09:15.953 PCIE (0000:00:13.0) NSID 1 from core 2: 2887.44 11.28 5540.72 1376.32 14135.83 00:09:15.953 PCIE (0000:00:12.0) NSID 1 from core 2: 2887.44 11.28 5540.43 1340.90 14322.52 00:09:15.953 PCIE (0000:00:12.0) NSID 2 from core 2: 2887.44 11.28 5540.78 1213.72 18792.47 00:09:15.953 PCIE (0000:00:12.0) NSID 3 from core 2: 2887.44 11.28 5540.69 1120.98 15233.86 00:09:15.953 ======================================================== 00:09:15.953 Total : 17324.64 67.67 5540.55 1120.98 18792.47 00:09:15.953 00:09:16.211 18:20:04 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77003 00:09:18.122 Initializing NVMe Controllers 00:09:18.122 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:18.122 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:18.122 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:18.122 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:18.122 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:18.122 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:18.122 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:18.122 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:18.122 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:18.122 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:18.122 Initialization complete. Launching workers. 
00:09:18.122 ======================================================== 00:09:18.122 Latency(us) 00:09:18.122 Device Information : IOPS MiB/s Average min max 00:09:18.122 PCIE (0000:00:10.0) NSID 1 from core 0: 10143.08 39.62 1576.25 760.94 6977.16 00:09:18.122 PCIE (0000:00:11.0) NSID 1 from core 0: 10143.08 39.62 1577.06 772.75 6833.71 00:09:18.122 PCIE (0000:00:13.0) NSID 1 from core 0: 10143.08 39.62 1577.04 663.95 5867.55 00:09:18.122 PCIE (0000:00:12.0) NSID 1 from core 0: 10143.08 39.62 1577.03 598.29 5872.19 00:09:18.122 PCIE (0000:00:12.0) NSID 2 from core 0: 10143.08 39.62 1577.01 494.45 6171.90 00:09:18.122 PCIE (0000:00:12.0) NSID 3 from core 0: 10143.08 39.62 1577.01 361.67 6528.18 00:09:18.122 ======================================================== 00:09:18.122 Total : 60858.47 237.73 1576.90 361.67 6977.16 00:09:18.122 00:09:18.122 18:20:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77004 00:09:18.122 18:20:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77073 00:09:18.122 18:20:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:18.122 18:20:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77074 00:09:18.122 18:20:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:18.122 18:20:06 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:21.435 Initializing NVMe Controllers 00:09:21.435 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:21.435 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:21.435 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:21.435 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:21.435 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:21.435 Associating 
PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:21.435 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:21.435 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:21.435 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:21.435 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:21.435 Initialization complete. Launching workers. 00:09:21.435 ======================================================== 00:09:21.435 Latency(us) 00:09:21.435 Device Information : IOPS MiB/s Average min max 00:09:21.435 PCIE (0000:00:10.0) NSID 1 from core 1: 4337.62 16.94 3687.03 775.53 9078.57 00:09:21.435 PCIE (0000:00:11.0) NSID 1 from core 1: 4337.62 16.94 3688.20 802.53 11634.45 00:09:21.435 PCIE (0000:00:13.0) NSID 1 from core 1: 4337.62 16.94 3688.66 801.31 9951.14 00:09:21.435 PCIE (0000:00:12.0) NSID 1 from core 1: 4337.62 16.94 3688.69 781.20 10586.65 00:09:21.435 PCIE (0000:00:12.0) NSID 2 from core 1: 4337.62 16.94 3689.07 793.55 10210.08 00:09:21.435 PCIE (0000:00:12.0) NSID 3 from core 1: 4337.62 16.94 3691.37 786.56 9723.06 00:09:21.435 ======================================================== 00:09:21.435 Total : 26025.73 101.66 3688.83 775.53 11634.45 00:09:21.435 00:09:21.435 Initializing NVMe Controllers 00:09:21.435 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:21.435 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:21.435 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:21.435 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:21.435 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:21.435 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:21.435 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:21.435 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:21.435 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:21.435 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:21.435 Initialization complete. Launching workers. 
00:09:21.435 ======================================================== 00:09:21.435 Latency(us) 00:09:21.435 Device Information : IOPS MiB/s Average min max 00:09:21.435 PCIE (0000:00:10.0) NSID 1 from core 0: 3999.83 15.62 3998.28 967.86 8734.07 00:09:21.435 PCIE (0000:00:11.0) NSID 1 from core 0: 3999.83 15.62 3999.70 947.55 8743.01 00:09:21.435 PCIE (0000:00:13.0) NSID 1 from core 0: 3999.83 15.62 3999.55 999.46 9178.84 00:09:21.435 PCIE (0000:00:12.0) NSID 1 from core 0: 3999.83 15.62 3999.40 871.74 9053.12 00:09:21.435 PCIE (0000:00:12.0) NSID 2 from core 0: 3999.83 15.62 3999.26 782.82 9073.40 00:09:21.435 PCIE (0000:00:12.0) NSID 3 from core 0: 3999.83 15.62 3999.13 599.26 8570.26 00:09:21.435 ======================================================== 00:09:21.435 Total : 23998.99 93.75 3999.22 599.26 9178.84 00:09:21.435 00:09:23.351 Initializing NVMe Controllers 00:09:23.351 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:23.351 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:23.351 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:23.351 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:23.351 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:23.351 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:23.351 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:23.351 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:23.351 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:23.351 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:23.351 Initialization complete. Launching workers. 
00:09:23.351 ======================================================== 00:09:23.351 Latency(us) 00:09:23.351 Device Information : IOPS MiB/s Average min max 00:09:23.351 PCIE (0000:00:10.0) NSID 1 from core 2: 2804.74 10.96 5703.02 893.22 27474.01 00:09:23.351 PCIE (0000:00:11.0) NSID 1 from core 2: 2804.74 10.96 5703.91 860.62 27238.81 00:09:23.351 PCIE (0000:00:13.0) NSID 1 from core 2: 2804.74 10.96 5704.06 866.83 27829.61 00:09:23.351 PCIE (0000:00:12.0) NSID 1 from core 2: 2804.74 10.96 5703.37 864.14 27778.33 00:09:23.351 PCIE (0000:00:12.0) NSID 2 from core 2: 2804.74 10.96 5703.81 925.39 32870.38 00:09:23.351 PCIE (0000:00:12.0) NSID 3 from core 2: 2804.74 10.96 5703.71 769.56 28040.58 00:09:23.351 ======================================================== 00:09:23.351 Total : 16828.46 65.74 5703.65 769.56 32870.38 00:09:23.351 00:09:23.351 18:20:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77073 00:09:23.351 18:20:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77074 00:09:23.351 00:09:23.351 real 0m10.538s 00:09:23.351 user 0m18.204s 00:09:23.351 sys 0m0.651s 00:09:23.351 18:20:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.351 18:20:12 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:23.351 ************************************ 00:09:23.351 END TEST nvme_multi_secondary 00:09:23.351 ************************************ 00:09:23.351 18:20:12 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:23.351 18:20:12 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:23.351 18:20:12 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/76035 ]] 00:09:23.351 18:20:12 nvme -- common/autotest_common.sh@1090 -- # kill 76035 00:09:23.351 18:20:12 nvme -- common/autotest_common.sh@1091 -- # wait 76035 00:09:23.351 [2024-10-08 18:20:12.055846] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. 
Dropping the request. 00:09:23.351 [2024-10-08 18:20:12.055942] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.351 [2024-10-08 18:20:12.055966] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.351 [2024-10-08 18:20:12.055985] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.351 [2024-10-08 18:20:12.057006] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 [2024-10-08 18:20:12.057074] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 [2024-10-08 18:20:12.057095] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 [2024-10-08 18:20:12.057113] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 [2024-10-08 18:20:12.058138] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 [2024-10-08 18:20:12.058210] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 [2024-10-08 18:20:12.058236] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 
00:09:23.352 [2024-10-08 18:20:12.058253] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 [2024-10-08 18:20:12.059240] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 [2024-10-08 18:20:12.059295] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 [2024-10-08 18:20:12.059318] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 [2024-10-08 18:20:12.059336] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76952) is not found. Dropping the request. 00:09:23.352 18:20:12 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:23.352 18:20:12 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:23.352 18:20:12 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:23.352 18:20:12 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:23.352 18:20:12 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.352 18:20:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:23.352 ************************************ 00:09:23.352 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:23.352 ************************************ 00:09:23.352 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:23.613 * Looking for test storage... 
00:09:23.613 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:23.613 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:23.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.614 --rc genhtml_branch_coverage=1 00:09:23.614 --rc genhtml_function_coverage=1 00:09:23.614 --rc genhtml_legend=1 00:09:23.614 --rc geninfo_all_blocks=1 00:09:23.614 --rc geninfo_unexecuted_blocks=1 00:09:23.614 00:09:23.614 ' 00:09:23.614 18:20:12 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:23.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.614 --rc genhtml_branch_coverage=1 00:09:23.614 --rc genhtml_function_coverage=1 00:09:23.614 --rc genhtml_legend=1 00:09:23.614 --rc geninfo_all_blocks=1 00:09:23.614 --rc geninfo_unexecuted_blocks=1 00:09:23.614 00:09:23.614 ' 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:23.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.614 --rc genhtml_branch_coverage=1 00:09:23.614 --rc genhtml_function_coverage=1 00:09:23.614 --rc genhtml_legend=1 00:09:23.614 --rc geninfo_all_blocks=1 00:09:23.614 --rc geninfo_unexecuted_blocks=1 00:09:23.614 00:09:23.614 ' 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:23.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.614 --rc genhtml_branch_coverage=1 00:09:23.614 --rc genhtml_function_coverage=1 00:09:23.614 --rc genhtml_legend=1 00:09:23.614 --rc geninfo_all_blocks=1 00:09:23.614 --rc geninfo_unexecuted_blocks=1 00:09:23.614 00:09:23.614 ' 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:23.614 18:20:12 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:23.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77236 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77236 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 77236 ']' 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.614 18:20:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:23.614 [2024-10-08 18:20:12.431140] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:09:23.614 [2024-10-08 18:20:12.431255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77236 ] 00:09:23.876 [2024-10-08 18:20:12.583598] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:23.876 [2024-10-08 18:20:12.604497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:23.876 [2024-10-08 18:20:12.639846] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.876 [2024-10-08 18:20:12.640038] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.876 [2024-10-08 18:20:12.640291] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.876 [2024-10-08 18:20:12.640356] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.446 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.446 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:24.446 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:24.446 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.446 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:24.707 nvme0n1 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # 
tmp_file=/tmp/err_inj_iUYYj.txt 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:24.707 true 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1728411613 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77259 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:24.707 18:20:13 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:26.622 [2024-10-08 18:20:15.347760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting 
controller 00:09:26.622 [2024-10-08 18:20:15.350102] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:26.622 [2024-10-08 18:20:15.350245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:26.622 [2024-10-08 18:20:15.350686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:26.622 [2024-10-08 18:20:15.354586] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.622 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77259 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77259 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77259 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:26.622 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_iUYYj.txt 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:26.623 
18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_iUYYj.txt 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77236 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 77236 ']' 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 77236 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77236 00:09:26.623 killing process with pid 77236 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77236' 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 77236 00:09:26.623 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
common/autotest_common.sh@974 -- # wait 77236 00:09:27.195 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:27.195 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:27.195 00:09:27.195 real 0m3.587s 00:09:27.195 user 0m12.663s 00:09:27.195 sys 0m0.487s 00:09:27.195 ************************************ 00:09:27.195 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:27.195 ************************************ 00:09:27.195 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.195 18:20:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:27.195 18:20:15 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:27.195 18:20:15 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:27.195 18:20:15 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:27.195 18:20:15 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.195 18:20:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.195 ************************************ 00:09:27.195 START TEST nvme_fio 00:09:27.195 ************************************ 00:09:27.195 18:20:15 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:27.195 18:20:15 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:27.195 18:20:15 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:27.195 18:20:15 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:27.195 18:20:15 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:27.195 18:20:15 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:27.195 18:20:15 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:09:27.195 18:20:15 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:27.195 18:20:15 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:27.195 18:20:15 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:27.195 18:20:15 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:27.195 18:20:15 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:27.195 18:20:15 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:27.195 18:20:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:27.195 18:20:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:27.195 18:20:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:27.456 18:20:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:27.456 18:20:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:27.456 18:20:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:27.456 18:20:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:27.456 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:27.456 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:27.456 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:27.456 
18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:27.456 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:27.456 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:27.456 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:27.456 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:27.456 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:27.456 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:27.456 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:27.717 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:27.717 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:27.717 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:27.717 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:27.717 18:20:16 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:27.717 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:27.717 fio-3.35 00:09:27.717 Starting 1 thread 00:09:33.012 00:09:33.012 test: (groupid=0, jobs=1): err= 0: pid=77382: Tue Oct 8 18:20:21 2024 00:09:33.012 read: IOPS=19.7k, BW=77.1MiB/s (80.9MB/s)(154MiB/2001msec) 00:09:33.012 slat (nsec): min=4225, max=95849, avg=5286.20, stdev=2572.53 00:09:33.012 clat (usec): min=205, max=16501, avg=3236.91, stdev=1074.98 00:09:33.012 lat (usec): min=210, 
max=16567, avg=3242.20, stdev=1076.12 00:09:33.012 clat percentiles (usec): 00:09:33.012 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:33.012 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2999], 00:09:33.012 | 70.00th=[ 3359], 80.00th=[ 3982], 90.00th=[ 4817], 95.00th=[ 5473], 00:09:33.012 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[10159], 99.95th=[13042], 00:09:33.012 | 99.99th=[16319] 00:09:33.012 bw ( KiB/s): min=71984, max=84776, per=99.57%, avg=78626.67, stdev=6410.25, samples=3 00:09:33.012 iops : min=17996, max=21194, avg=19656.67, stdev=1602.56, samples=3 00:09:33.012 write: IOPS=19.7k, BW=77.0MiB/s (80.7MB/s)(154MiB/2001msec); 0 zone resets 00:09:33.012 slat (nsec): min=4308, max=64865, avg=5647.08, stdev=2543.94 00:09:33.012 clat (usec): min=221, max=16379, avg=3231.28, stdev=1065.56 00:09:33.012 lat (usec): min=226, max=16392, avg=3236.92, stdev=1066.68 00:09:33.012 clat percentiles (usec): 00:09:33.012 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:33.012 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 2999], 00:09:33.012 | 70.00th=[ 3359], 80.00th=[ 3949], 90.00th=[ 4752], 95.00th=[ 5407], 00:09:33.012 | 99.00th=[ 6521], 99.50th=[ 6783], 99.90th=[10552], 99.95th=[13304], 00:09:33.012 | 99.99th=[15926] 00:09:33.012 bw ( KiB/s): min=71904, max=85056, per=99.88%, avg=78720.00, stdev=6589.13, samples=3 00:09:33.012 iops : min=17976, max=21264, avg=19680.00, stdev=1647.28, samples=3 00:09:33.012 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:33.012 lat (msec) : 2=0.42%, 4=79.92%, 10=19.49%, 20=0.12% 00:09:33.012 cpu : usr=99.00%, sys=0.10%, ctx=4, majf=0, minf=624 00:09:33.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:33.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:33.012 issued rwts: 
total=39503,39427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.012 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:33.012 00:09:33.012 Run status group 0 (all jobs): 00:09:33.012 READ: bw=77.1MiB/s (80.9MB/s), 77.1MiB/s-77.1MiB/s (80.9MB/s-80.9MB/s), io=154MiB (162MB), run=2001-2001msec 00:09:33.012 WRITE: bw=77.0MiB/s (80.7MB/s), 77.0MiB/s-77.0MiB/s (80.7MB/s-80.7MB/s), io=154MiB (161MB), run=2001-2001msec 00:09:33.012 ----------------------------------------------------- 00:09:33.012 Suppressions used: 00:09:33.012 count bytes template 00:09:33.012 1 32 /usr/src/fio/parse.c 00:09:33.012 1 8 libtcmalloc_minimal.so 00:09:33.012 ----------------------------------------------------- 00:09:33.012 00:09:33.012 18:20:21 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:33.012 18:20:21 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:33.012 18:20:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:33.012 18:20:21 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:33.012 18:20:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:33.012 18:20:21 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:33.273 18:20:21 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:33.273 18:20:21 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:33.273 18:20:21 
nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:33.273 18:20:21 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:33.273 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:33.273 fio-3.35 00:09:33.273 Starting 1 thread 00:09:39.856 00:09:39.856 test: (groupid=0, jobs=1): err= 0: pid=77439: Tue Oct 8 18:20:28 2024 00:09:39.856 read: IOPS=22.2k, BW=86.7MiB/s (90.9MB/s)(174MiB/2001msec) 00:09:39.856 slat (nsec): min=4782, max=72196, avg=5635.84, stdev=1812.61 
00:09:39.856 clat (usec): min=230, max=10577, avg=2880.46, stdev=729.02 00:09:39.856 lat (usec): min=236, max=10629, avg=2886.10, stdev=730.03 00:09:39.856 clat percentiles (usec): 00:09:39.856 | 1.00th=[ 2180], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:39.856 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2769], 00:09:39.856 | 70.00th=[ 2868], 80.00th=[ 2999], 90.00th=[ 3490], 95.00th=[ 4293], 00:09:39.856 | 99.00th=[ 6259], 99.50th=[ 6783], 99.90th=[ 7504], 99.95th=[ 8586], 00:09:39.856 | 99.99th=[10421] 00:09:39.856 bw ( KiB/s): min=86464, max=92768, per=100.00%, avg=90058.67, stdev=3243.91, samples=3 00:09:39.856 iops : min=21616, max=23192, avg=22514.67, stdev=810.98, samples=3 00:09:39.856 write: IOPS=22.0k, BW=86.1MiB/s (90.3MB/s)(172MiB/2001msec); 0 zone resets 00:09:39.856 slat (nsec): min=4933, max=43020, avg=5914.79, stdev=1779.00 00:09:39.856 clat (usec): min=299, max=10499, avg=2885.40, stdev=723.78 00:09:39.856 lat (usec): min=305, max=10516, avg=2891.31, stdev=724.77 00:09:39.856 clat percentiles (usec): 00:09:39.856 | 1.00th=[ 2180], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:39.856 | 30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2769], 00:09:39.856 | 70.00th=[ 2868], 80.00th=[ 3032], 90.00th=[ 3458], 95.00th=[ 4293], 00:09:39.856 | 99.00th=[ 6259], 99.50th=[ 6783], 99.90th=[ 7701], 99.95th=[ 8717], 00:09:39.856 | 99.99th=[10290] 00:09:39.856 bw ( KiB/s): min=88472, max=92520, per=100.00%, avg=90274.67, stdev=2059.99, samples=3 00:09:39.856 iops : min=22118, max=23130, avg=22568.67, stdev=515.00, samples=3 00:09:39.856 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:39.856 lat (msec) : 2=0.20%, 4=93.71%, 10=6.03%, 20=0.02% 00:09:39.856 cpu : usr=99.25%, sys=0.10%, ctx=3, majf=0, minf=624 00:09:39.856 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:39.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:39.857 complete 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:39.857 issued rwts: total=44419,44099,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:39.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:39.857 00:09:39.857 Run status group 0 (all jobs): 00:09:39.857 READ: bw=86.7MiB/s (90.9MB/s), 86.7MiB/s-86.7MiB/s (90.9MB/s-90.9MB/s), io=174MiB (182MB), run=2001-2001msec 00:09:39.857 WRITE: bw=86.1MiB/s (90.3MB/s), 86.1MiB/s-86.1MiB/s (90.3MB/s-90.3MB/s), io=172MiB (181MB), run=2001-2001msec 00:09:39.857 ----------------------------------------------------- 00:09:39.857 Suppressions used: 00:09:39.857 count bytes template 00:09:39.857 1 32 /usr/src/fio/parse.c 00:09:39.857 1 8 libtcmalloc_minimal.so 00:09:39.857 ----------------------------------------------------- 00:09:39.857 00:09:39.857 18:20:28 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:39.857 18:20:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:39.857 18:20:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:39.857 18:20:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:40.117 18:20:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:40.117 18:20:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:40.378 18:20:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:40.378 18:20:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:40.378 18:20:29 nvme.nvme_fio -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:40.378 18:20:29 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:40.638 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:40.638 fio-3.35 00:09:40.638 Starting 1 thread 00:09:47.229 00:09:47.229 test: (groupid=0, jobs=1): err= 0: pid=77500: Tue Oct 8 18:20:35 2024 00:09:47.229 read: IOPS=21.0k, BW=82.2MiB/s 
(86.2MB/s)(165MiB/2001msec) 00:09:47.229 slat (nsec): min=4782, max=63732, avg=5897.14, stdev=2386.60 00:09:47.229 clat (usec): min=281, max=14151, avg=3037.95, stdev=962.92 00:09:47.229 lat (usec): min=286, max=14215, avg=3043.85, stdev=964.35 00:09:47.229 clat percentiles (usec): 00:09:47.229 | 1.00th=[ 2212], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2540], 00:09:47.229 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2835], 00:09:47.229 | 70.00th=[ 2933], 80.00th=[ 3163], 90.00th=[ 3916], 95.00th=[ 5276], 00:09:47.229 | 99.00th=[ 6980], 99.50th=[ 7504], 99.90th=[ 8717], 99.95th=[10814], 00:09:47.229 | 99.99th=[13960] 00:09:47.229 bw ( KiB/s): min=82744, max=85600, per=100.00%, avg=84466.67, stdev=1516.47, samples=3 00:09:47.229 iops : min=20686, max=21400, avg=21116.67, stdev=379.12, samples=3 00:09:47.229 write: IOPS=20.9k, BW=81.7MiB/s (85.7MB/s)(164MiB/2001msec); 0 zone resets 00:09:47.229 slat (nsec): min=4898, max=58313, avg=6163.94, stdev=2365.25 00:09:47.229 clat (usec): min=308, max=13975, avg=3039.59, stdev=952.82 00:09:47.229 lat (usec): min=314, max=13991, avg=3045.75, stdev=954.20 00:09:47.229 clat percentiles (usec): 00:09:47.229 | 1.00th=[ 2245], 5.00th=[ 2409], 10.00th=[ 2474], 20.00th=[ 2540], 00:09:47.229 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2835], 00:09:47.229 | 70.00th=[ 2933], 80.00th=[ 3163], 90.00th=[ 3884], 95.00th=[ 5211], 00:09:47.229 | 99.00th=[ 6980], 99.50th=[ 7504], 99.90th=[ 8979], 99.95th=[11338], 00:09:47.229 | 99.99th=[13698] 00:09:47.229 bw ( KiB/s): min=83280, max=85632, per=100.00%, avg=84600.00, stdev=1202.16, samples=3 00:09:47.229 iops : min=20820, max=21408, avg=21150.00, stdev=300.54, samples=3 00:09:47.229 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:47.229 lat (msec) : 2=0.13%, 4=90.46%, 10=9.31%, 20=0.06% 00:09:47.229 cpu : usr=99.10%, sys=0.20%, ctx=5, majf=0, minf=625 00:09:47.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:47.229 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.229 issued rwts: total=42113,41867,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.229 00:09:47.229 Run status group 0 (all jobs): 00:09:47.229 READ: bw=82.2MiB/s (86.2MB/s), 82.2MiB/s-82.2MiB/s (86.2MB/s-86.2MB/s), io=165MiB (172MB), run=2001-2001msec 00:09:47.229 WRITE: bw=81.7MiB/s (85.7MB/s), 81.7MiB/s-81.7MiB/s (85.7MB/s-85.7MB/s), io=164MiB (171MB), run=2001-2001msec 00:09:47.229 ----------------------------------------------------- 00:09:47.229 Suppressions used: 00:09:47.229 count bytes template 00:09:47.229 1 32 /usr/src/fio/parse.c 00:09:47.229 1 8 libtcmalloc_minimal.so 00:09:47.229 ----------------------------------------------------- 00:09:47.229 00:09:47.229 18:20:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:47.229 18:20:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:47.229 18:20:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:47.229 18:20:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:47.229 18:20:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:47.229 18:20:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:47.491 18:20:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:47.491 18:20:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:47.491 18:20:36 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:47.491 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:47.491 fio-3.35 00:09:47.491 Starting 1 thread 00:09:52.780 00:09:52.780 test: (groupid=0, jobs=1): err= 0: 
pid=77555: Tue Oct 8 18:20:41 2024 00:09:52.780 read: IOPS=16.5k, BW=64.5MiB/s (67.6MB/s)(129MiB/2001msec) 00:09:52.780 slat (usec): min=5, max=104, avg= 7.32, stdev= 2.87 00:09:52.780 clat (usec): min=581, max=10782, avg=3844.67, stdev=1065.10 00:09:52.780 lat (usec): min=588, max=10831, avg=3851.99, stdev=1066.18 00:09:52.780 clat percentiles (usec): 00:09:52.780 | 1.00th=[ 2671], 5.00th=[ 2868], 10.00th=[ 2999], 20.00th=[ 3130], 00:09:52.780 | 30.00th=[ 3261], 40.00th=[ 3392], 50.00th=[ 3490], 60.00th=[ 3654], 00:09:52.780 | 70.00th=[ 3818], 80.00th=[ 4228], 90.00th=[ 5473], 95.00th=[ 6325], 00:09:52.780 | 99.00th=[ 7570], 99.50th=[ 7963], 99.90th=[ 8717], 99.95th=[ 9896], 00:09:52.780 | 99.99th=[10683] 00:09:52.780 bw ( KiB/s): min=55872, max=72608, per=100.00%, avg=66161.00, stdev=9005.23, samples=3 00:09:52.780 iops : min=13968, max=18152, avg=16540.00, stdev=2251.15, samples=3 00:09:52.780 write: IOPS=16.6k, BW=64.7MiB/s (67.8MB/s)(129MiB/2001msec); 0 zone resets 00:09:52.780 slat (nsec): min=5204, max=85816, avg=7623.30, stdev=2842.33 00:09:52.780 clat (usec): min=570, max=10701, avg=3870.74, stdev=1066.18 00:09:52.780 lat (usec): min=577, max=10717, avg=3878.36, stdev=1067.24 00:09:52.780 clat percentiles (usec): 00:09:52.780 | 1.00th=[ 2704], 5.00th=[ 2900], 10.00th=[ 2999], 20.00th=[ 3163], 00:09:52.780 | 30.00th=[ 3294], 40.00th=[ 3392], 50.00th=[ 3523], 60.00th=[ 3654], 00:09:52.780 | 70.00th=[ 3851], 80.00th=[ 4293], 90.00th=[ 5473], 95.00th=[ 6390], 00:09:52.780 | 99.00th=[ 7570], 99.50th=[ 8029], 99.90th=[ 8848], 99.95th=[10028], 00:09:52.780 | 99.99th=[10552] 00:09:52.780 bw ( KiB/s): min=55432, max=72928, per=99.83%, avg=66092.00, stdev=9353.86, samples=3 00:09:52.780 iops : min=13858, max=18232, avg=16523.00, stdev=2338.47, samples=3 00:09:52.780 lat (usec) : 750=0.01%, 1000=0.01% 00:09:52.780 lat (msec) : 2=0.04%, 4=75.79%, 10=24.10%, 20=0.05% 00:09:52.780 cpu : usr=98.95%, sys=0.05%, ctx=4, majf=0, minf=624 00:09:52.780 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:52.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:52.780 issued rwts: total=33047,33119,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.780 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:52.780 00:09:52.780 Run status group 0 (all jobs): 00:09:52.780 READ: bw=64.5MiB/s (67.6MB/s), 64.5MiB/s-64.5MiB/s (67.6MB/s-67.6MB/s), io=129MiB (135MB), run=2001-2001msec 00:09:52.780 WRITE: bw=64.7MiB/s (67.8MB/s), 64.7MiB/s-64.7MiB/s (67.8MB/s-67.8MB/s), io=129MiB (136MB), run=2001-2001msec 00:09:52.780 ----------------------------------------------------- 00:09:52.780 Suppressions used: 00:09:52.780 count bytes template 00:09:52.780 1 32 /usr/src/fio/parse.c 00:09:52.780 1 8 libtcmalloc_minimal.so 00:09:52.780 ----------------------------------------------------- 00:09:52.780 00:09:52.780 ************************************ 00:09:52.780 END TEST nvme_fio 00:09:52.780 ************************************ 00:09:52.780 18:20:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:52.780 18:20:41 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:52.780 00:09:52.780 real 0m25.707s 00:09:52.780 user 0m20.415s 00:09:52.780 sys 0m6.857s 00:09:52.780 18:20:41 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.780 18:20:41 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:52.780 00:09:52.780 real 1m33.368s 00:09:52.780 user 3m35.824s 00:09:52.780 sys 0m17.307s 00:09:52.780 ************************************ 00:09:52.780 END TEST nvme 00:09:52.780 ************************************ 00:09:52.780 18:20:41 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.780 18:20:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:52.780 18:20:41 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:52.780 18:20:41 -- spdk/autotest.sh@217 -- # 
run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:52.780 18:20:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:52.780 18:20:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.780 18:20:41 -- common/autotest_common.sh@10 -- # set +x 00:09:53.041 ************************************ 00:09:53.041 START TEST nvme_scc 00:09:53.041 ************************************ 00:09:53.041 18:20:41 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:53.041 * Looking for test storage... 00:09:53.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:53.041 18:20:41 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:53.041 18:20:41 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:53.041 18:20:41 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:53.041 18:20:41 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:53.041 18:20:41 nvme_scc -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.041 18:20:41 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:53.041 18:20:41 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.041 18:20:41 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:53.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.041 --rc genhtml_branch_coverage=1 00:09:53.041 --rc genhtml_function_coverage=1 00:09:53.041 --rc genhtml_legend=1 00:09:53.041 --rc geninfo_all_blocks=1 00:09:53.041 --rc geninfo_unexecuted_blocks=1 00:09:53.041 00:09:53.041 ' 00:09:53.041 18:20:41 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:53.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.042 --rc genhtml_branch_coverage=1 00:09:53.042 --rc genhtml_function_coverage=1 00:09:53.042 --rc genhtml_legend=1 00:09:53.042 --rc geninfo_all_blocks=1 00:09:53.042 --rc 
geninfo_unexecuted_blocks=1 00:09:53.042 00:09:53.042 ' 00:09:53.042 18:20:41 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:53.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.042 --rc genhtml_branch_coverage=1 00:09:53.042 --rc genhtml_function_coverage=1 00:09:53.042 --rc genhtml_legend=1 00:09:53.042 --rc geninfo_all_blocks=1 00:09:53.042 --rc geninfo_unexecuted_blocks=1 00:09:53.042 00:09:53.042 ' 00:09:53.042 18:20:41 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:53.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.042 --rc genhtml_branch_coverage=1 00:09:53.042 --rc genhtml_function_coverage=1 00:09:53.042 --rc genhtml_legend=1 00:09:53.042 --rc geninfo_all_blocks=1 00:09:53.042 --rc geninfo_unexecuted_blocks=1 00:09:53.042 00:09:53.042 ' 00:09:53.042 18:20:41 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.042 18:20:41 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.042 18:20:41 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.042 18:20:41 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.042 18:20:41 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.042 18:20:41 nvme_scc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.042 18:20:41 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.042 18:20:41 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.042 18:20:41 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:53.042 18:20:41 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:53.042 18:20:41 
nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:53.042 18:20:41 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:53.042 18:20:41 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:53.042 18:20:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:53.042 18:20:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:53.042 18:20:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:53.042 18:20:41 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:53.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:53.563 Waiting for block devices as requested 00:09:53.563 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.825 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.825 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.825 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:59.164 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:59.164 18:20:47 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 
00:09:59.164 18:20:47 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:59.164 18:20:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:59.164 18:20:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.164 18:20:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 
00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[cmic]="0"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:59.164 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:59.164 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:59.165 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:59.165 
18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:59.165 
18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.165 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[sqes]="0x66"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:59.166 18:20:47 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 
18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0[maxdna]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:59.166 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:59.166 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 
rwl:0 idle_power:- active_power:-"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[nsze]="0x140000"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:59.167 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[npwg]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.167 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:59.168 18:20:47 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:59.168 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:59.168 18:20:47 nvme_scc -- 
scripts/common.sh@18 -- # local i 00:09:59.168 18:20:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:59.168 18:20:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.168 18:20:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:59.168 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:59.169 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:59.169 18:20:47 
nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:59.169 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:59.169 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- 
# nvme1[kas]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:59.170 18:20:47 
nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:59.170 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:59.171 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:59.171 
18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:59.171 
18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:59.171 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:59.171 
18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.171 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[npwg]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:59.172 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:59.173 18:20:47 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:59.173 18:20:47 nvme_scc -- scripts/common.sh@21 -- # 
[[ =~ 0000:00:12.0 ]] 00:09:59.173 18:20:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.173 18:20:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:59.173 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:59.173 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 
18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 
18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.173 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 
00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 
-- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 
00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.174 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:59.175 
18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:59.175 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 
18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[nwpc]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:59.175 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.175 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- 
active_power:-' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:59.176 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:59.176 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.176 18:20:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.176 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nvmsetid]="0"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n ms:8 lbads:9 rp:0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:59.177 18:20:47 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 
00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:59.177 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.177 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n2[nvmcap]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 
(in use)' 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:59.178 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:59.179 
18:20:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:59.179 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[noiob]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:59.179 18:20:47 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:59.179 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 
rp:0 ' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 
00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:59.180 18:20:47 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:59.180 18:20:47 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:59.180 18:20:47 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:59.180 18:20:47 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[vid]="0x1b36"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:59.180 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.180 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:59.181 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:59.181 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.181 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:59.182 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
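The `sqes=0x66` and `cqes=0x44` values captured above pack two sizes into one byte: per the NVMe base specification, the low nibble is log2 of the required queue entry size and the high nibble is log2 of the maximum. A small sketch decoding them (the helper name is illustrative, not part of the test scripts):

```shell
#!/usr/bin/env bash
# Decode an NVMe SQES/CQES byte: low nibble = log2(required entry size),
# high nibble = log2(maximum entry size). 0x66 and 0x44 are the values
# reported by the controller in the trace above.
decode_qes() {
  local qes=$(( $1 ))
  echo "required=$(( 1 << (qes & 0xf) )) max=$(( 1 << (qes >> 4) ))"
}

decode_qes 0x66   # SQ entries: required=64 max=64 (standard 64-byte SQE)
decode_qes 0x44   # CQ entries: required=16 max=16 (standard 16-byte CQE)
```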
00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:59.182 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:59.183 18:20:47 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # 
nvme3[ioccsz]=0 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:59.183 18:20:47 nvme_scc -- nvme/nvme_scc.sh@17 -- # 
get_ctrl_with_feature scc 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:59.183 18:20:47 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:59.183 18:20:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:59.183 18:20:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:59.183 18:20:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:59.183 18:20:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:59.183 18:20:48 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:59.183 18:20:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:59.183 18:20:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 
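The controller filtering above repeatedly evaluates `(( oncs & 1 << 8 ))`: bit 8 of the ONCS (Optional NVM Command Support) field advertises the Simple Copy command, and every controller here reports `oncs=0x15d`, which has that bit set. A standalone sketch of the same bit test:

```shell
#!/usr/bin/env bash
# Bit 8 of ONCS advertises Simple Copy support; 0x15d is the value the
# controllers in this log report. Non-zero arithmetic result -> exit 0.
ctrl_has_scc() {
  local oncs=$1
  (( oncs & 1 << 8 ))
}

if ctrl_has_scc 0x15d; then
  echo "SCC supported"
else
  echo "SCC not supported"
fi
# prints: SCC supported
```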
00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:59.445 18:20:48 
nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:59.445 18:20:48 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:59.445 18:20:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:59.445 18:20:48 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:59.445 18:20:48 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:59.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:00.287 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:00.287 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:00.287 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:00.549 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:00.549 18:20:49 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test 
nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:00.549 18:20:49 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:00.549 18:20:49 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.549 18:20:49 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:00.549 ************************************ 00:10:00.549 START TEST nvme_simple_copy 00:10:00.549 ************************************ 00:10:00.549 18:20:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:00.811 Initializing NVMe Controllers 00:10:00.811 Attaching to 0000:00:10.0 00:10:00.811 Controller supports SCC. Attached to 0000:00:10.0 00:10:00.811 Namespace ID: 1 size: 6GB 00:10:00.811 Initialization complete. 00:10:00.811 00:10:00.811 Controller QEMU NVMe Ctrl (12340 ) 00:10:00.811 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:00.811 Namespace Block Size:4096 00:10:00.811 Writing LBAs 0 to 63 with Random Data 00:10:00.811 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:00.811 LBAs matching Written Data: 64 00:10:00.811 00:10:00.811 real 0m0.264s 00:10:00.811 user 0m0.096s 00:10:00.811 sys 0m0.066s 00:10:00.811 ************************************ 00:10:00.811 END TEST nvme_simple_copy 00:10:00.811 ************************************ 00:10:00.811 18:20:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.811 18:20:49 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:00.811 00:10:00.811 real 0m7.904s 00:10:00.811 user 0m1.071s 00:10:00.811 sys 0m1.583s 00:10:00.811 ************************************ 00:10:00.811 END TEST nvme_scc 00:10:00.811 ************************************ 00:10:00.811 18:20:49 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.811 18:20:49 
nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:00.811 18:20:49 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:00.811 18:20:49 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:00.811 18:20:49 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:00.811 18:20:49 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:00.811 18:20:49 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:00.811 18:20:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:00.811 18:20:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.811 18:20:49 -- common/autotest_common.sh@10 -- # set +x 00:10:00.811 ************************************ 00:10:00.811 START TEST nvme_fdp 00:10:00.811 ************************************ 00:10:00.811 18:20:49 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:10:01.072 * Looking for test storage... 00:10:01.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:01.072 18:20:49 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:01.072 18:20:49 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:10:01.072 18:20:49 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:01.072 18:20:49 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:01.072 
18:20:49 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:01.072 18:20:49 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:01.072 18:20:49 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.072 --rc genhtml_branch_coverage=1 00:10:01.072 --rc genhtml_function_coverage=1 00:10:01.072 --rc genhtml_legend=1 00:10:01.072 --rc geninfo_all_blocks=1 00:10:01.072 --rc geninfo_unexecuted_blocks=1 00:10:01.072 00:10:01.072 ' 00:10:01.072 18:20:49 nvme_fdp -- 
common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.072 --rc genhtml_branch_coverage=1 00:10:01.072 --rc genhtml_function_coverage=1 00:10:01.072 --rc genhtml_legend=1 00:10:01.072 --rc geninfo_all_blocks=1 00:10:01.072 --rc geninfo_unexecuted_blocks=1 00:10:01.072 00:10:01.072 ' 00:10:01.072 18:20:49 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.072 --rc genhtml_branch_coverage=1 00:10:01.072 --rc genhtml_function_coverage=1 00:10:01.072 --rc genhtml_legend=1 00:10:01.072 --rc geninfo_all_blocks=1 00:10:01.072 --rc geninfo_unexecuted_blocks=1 00:10:01.072 00:10:01.072 ' 00:10:01.072 18:20:49 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:01.072 --rc genhtml_branch_coverage=1 00:10:01.072 --rc genhtml_function_coverage=1 00:10:01.072 --rc genhtml_legend=1 00:10:01.072 --rc geninfo_all_blocks=1 00:10:01.072 --rc geninfo_unexecuted_blocks=1 00:10:01.072 00:10:01.072 ' 00:10:01.072 18:20:49 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh 
]] 00:10:01.072 18:20:49 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.072 18:20:49 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.072 18:20:49 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.072 18:20:49 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.072 18:20:49 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:01.072 18:20:49 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
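Earlier in this run, the `lt 1.15 2` check against the lcov version walked both version strings component by component, comparing each field as a decimal. A hedged sketch of that comparison pattern (`version_lt` is an illustrative name, not the exact scripts/common.sh implementation, which also splits on `-` and `:`):

```shell
#!/usr/bin/env bash
# Numeric, component-wise "less than" for dotted version strings,
# in the spirit of the cmp_versions trace above. Missing components
# are treated as 0, so 1.15 vs 2 compares 1<2 and stops.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Comparing fields numerically rather than as strings is what makes `1.2 < 1.10` come out true, which a plain lexicographic comparison would get wrong.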
00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:01.072 18:20:49 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:01.072 18:20:49 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.072 18:20:49 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:01.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:01.594 Waiting for block devices as requested 00:10:01.594 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.594 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.854 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:01.854 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.152 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:07.152 18:20:55 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:07.152 18:20:55 nvme_fdp -- scripts/common.sh@18 -- # 
local i 00:10:07.152 18:20:55 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:07.152 18:20:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.152 18:20:55 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme0[sn]="12341 "' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:07.152 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 
-- # nvme0[cmic]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme0[lpa]=0x7 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.153 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:07.154 
18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme0[sqes]=0x66 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.154 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme0[maxdna]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:07.155 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:07.155 18:20:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme0n1[nsze]=0x140000 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:07.155 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:07.155 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme0n1[npwg]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.156 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:07.157 18:20:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:07.157 18:20:55 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:07.157 18:20:55 nvme_fdp -- scripts/common.sh@21 -- # 
[[ =~ 0000:00:10.0 ]] 00:10:07.157 18:20:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.157 18:20:55 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:07.157 18:20:55 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:07.157 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 
18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.157 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 
18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.158 18:20:55 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 373 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"'
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.158 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"'
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.159 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"'
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.160 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:07.161 18:20:55 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:07.161 18:20:55 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:10:07.161 18:20:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:07.161 18:20:55 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.161 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"'
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"'
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]]
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "'
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:10:07.162
18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[mec]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.162 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:07.163 
18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- 
# nvme2[unvmcap]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[anagrpmax]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:07.163 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 
-- # nvme2[nwpc]=0 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:07.164 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.164 18:20:55 nvme_fdp -- 
[trace condensed, 00:10:07.164–00:10:07.167] 18:20:55 nvme_fdp -- nvme/functions.sh: nvme_get finished populating the controller array nvme2 (subnqn=nqn.2019-08.org.qemu:12342; maxcna=0, ioccsz=0, iorcsz=0, icdoff=0, fcatt=0, msdbd=0, ofcs=0; ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'; rwt='0 rwl:0 idle_power:- active_power:-'; active_power_workload=-), then iterated the namespaces under /sys/class/nvme/nvme2 and ran /usr/local/src/nvme-cli/nvme id-ns for /dev/nvme2n1 and /dev/nvme2n2. Both namespaces report identical identify data: nsze=ncap=nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dps=0, nmic=0, rescap=0, fpi=0, dlfeat=1, nawun=nawupf=nacwu=nabsn=nabo=nabspf=noiob=nvmcap=npwg=npwa=npdg=npda=nows=0, mssrl=128, mcl=128, msrc=127, nulbaf=0, anagrpid=0, nsattr=0, nvmsetid=0, endgid=0, nguid=00000000000000000000000000000000, eui64=0000000000000000, with LBA formats lbaf0–lbaf3 (lbads:9, ms:0/8/16/64) and lbaf4–lbaf7 (lbads:12, ms:0/8/16/64), lbaf4 'ms:0 lbads:12 rp:0' in use.
-- nvme/functions.sh@21 -- # IFS=: 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:07.167 18:20:55 nvme_fdp -- 
nvme/functions.sh@18 -- # shift 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:07.167 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nsfeat]=0x14 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:07.168 18:20:55 
nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp 
-- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.168 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:07.169 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:07.169 18:20:55 nvme_fdp -- 
nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:07.169 18:20:55 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:07.169 18:20:55 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:07.169 18:20:55 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.169 18:20:55 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 
00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[rab]=6 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 1 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:07.169 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 
18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[aerl]=3 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 
00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:07.170 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[domainid]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 
18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme3[sgls]=0x1 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:07.171 18:20:55 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:07.171 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.172 
18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:07.172 18:20:55 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:07.172 
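The wall of xtrace above is `nvme/functions.sh` repeatedly splitting `"register : value"` lines from the controller identify output and caching them in a per-controller associative array (e.g. `nvme3[oacs]=0x12a`). A simplified, hypothetical sketch of that loop (the real script builds the assignment with `eval` on the array name; the heredoc input here is invented for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the IFS=: / read -r reg val loop traced above: split each
# "reg : val" line on the FIRST ':' and cache it in the controller's array.
declare -A nvme3

while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}   # strip padding around the register name
    val=${val# }               # drop the single space after ':'
    [[ -n $val ]] && nvme3[$reg]=$val
done <<'EOF'
oacs   : 0x12a
sqes   : 0x66
cqes   : 0x44
subnqn : nqn.2019-08.org.qemu:fdp-subsys3
EOF

echo "${nvme3[oacs]} ${nvme3[subnqn]}"
```

Because `read` hands the remainder of the line to its last variable, values that themselves contain colons (such as the `subnqn` NQN above) survive the split intact.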
18:20:55 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:07.172 
18:20:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@199 -- # 
ctrl_has_fdp nvme2 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:07.172 18:20:55 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:07.172 18:20:55 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:07.172 18:20:55 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:07.172 18:20:55 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:07.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:08.312 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:08.312 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:08.312 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:08.312 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:08.573 18:20:57 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:08.573 18:20:57 nvme_fdp -- common/autotest_common.sh@1101 -- # 
'[' 4 -le 1 ']' 00:10:08.573 18:20:57 nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.573 18:20:57 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:08.573 ************************************ 00:10:08.573 START TEST nvme_flexible_data_placement 00:10:08.573 ************************************ 00:10:08.573 18:20:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:08.833 Initializing NVMe Controllers 00:10:08.833 Attaching to 0000:00:13.0 00:10:08.833 Controller supports FDP Attached to 0000:00:13.0 00:10:08.833 Namespace ID: 1 Endurance Group ID: 1 00:10:08.833 Initialization complete. 00:10:08.833 00:10:08.833 ================================== 00:10:08.833 == FDP tests for Namespace: #01 == 00:10:08.833 ================================== 00:10:08.833 00:10:08.833 Get Feature: FDP: 00:10:08.833 ================= 00:10:08.833 Enabled: Yes 00:10:08.833 FDP configuration Index: 0 00:10:08.833 00:10:08.833 FDP configurations log page 00:10:08.833 =========================== 00:10:08.833 Number of FDP configurations: 1 00:10:08.833 Version: 0 00:10:08.833 Size: 112 00:10:08.833 FDP Configuration Descriptor: 0 00:10:08.833 Descriptor Size: 96 00:10:08.833 Reclaim Group Identifier format: 2 00:10:08.834 FDP Volatile Write Cache: Not Present 00:10:08.834 FDP Configuration: Valid 00:10:08.834 Vendor Specific Size: 0 00:10:08.834 Number of Reclaim Groups: 2 00:10:08.834 Number of Reclaim Unit Handles: 8 00:10:08.834 Max Placement Identifiers: 128 00:10:08.834 Number of Namespaces Supported: 256 00:10:08.834 Reclaim unit Nominal Size: 6000000 bytes 00:10:08.834 Estimated Reclaim Unit Time Limit: Not Reported 00:10:08.834 RUH Desc #000: RUH Type: Initially Isolated 00:10:08.834 RUH Desc #001: RUH Type: Initially Isolated 00:10:08.834 RUH Desc #002: RUH Type: Initially Isolated 00:10:08.834 RUH Desc #003: RUH Type: Initially 
Isolated 00:10:08.834 RUH Desc #004: RUH Type: Initially Isolated 00:10:08.834 RUH Desc #005: RUH Type: Initially Isolated 00:10:08.834 RUH Desc #006: RUH Type: Initially Isolated 00:10:08.834 RUH Desc #007: RUH Type: Initially Isolated 00:10:08.834 00:10:08.834 FDP reclaim unit handle usage log page 00:10:08.834 ====================================== 00:10:08.834 Number of Reclaim Unit Handles: 8 00:10:08.834 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:08.834 RUH Usage Desc #001: RUH Attributes: Unused 00:10:08.834 RUH Usage Desc #002: RUH Attributes: Unused 00:10:08.834 RUH Usage Desc #003: RUH Attributes: Unused 00:10:08.834 RUH Usage Desc #004: RUH Attributes: Unused 00:10:08.834 RUH Usage Desc #005: RUH Attributes: Unused 00:10:08.834 RUH Usage Desc #006: RUH Attributes: Unused 00:10:08.834 RUH Usage Desc #007: RUH Attributes: Unused 00:10:08.834 00:10:08.834 FDP statistics log page 00:10:08.834 ======================= 00:10:08.834 Host bytes with metadata written: 1893900288 00:10:08.834 Media bytes with metadata written: 1894940672 00:10:08.834 Media bytes erased: 0 00:10:08.834 00:10:08.834 FDP Reclaim unit handle status 00:10:08.834 ============================== 00:10:08.834 Number of RUHS descriptors: 2 00:10:08.834 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000011d6 00:10:08.834 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:08.834 00:10:08.834 FDP write on placement id: 0 success 00:10:08.834 00:10:08.834 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:08.834 00:10:08.834 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:08.834 00:10:08.834 Get Feature: FDP Events for Placement handle: #0 00:10:08.834 ======================== 00:10:08.834 Number of FDP Events: 6 00:10:08.834 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:08.834 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:08.834 FDP 
Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:08.834 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:08.834 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:08.834 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:08.834 00:10:08.834 FDP events log page 00:10:08.834 =================== 00:10:08.834 Number of FDP events: 1 00:10:08.834 FDP Event #0: 00:10:08.834 Event Type: RU Not Written to Capacity 00:10:08.834 Placement Identifier: Valid 00:10:08.834 NSID: Valid 00:10:08.834 Location: Valid 00:10:08.834 Placement Identifier: 0 00:10:08.834 Event Timestamp: 4 00:10:08.834 Namespace Identifier: 1 00:10:08.834 Reclaim Group Identifier: 0 00:10:08.834 Reclaim Unit Handle Identifier: 0 00:10:08.834 00:10:08.834 FDP test passed 00:10:08.834 00:10:08.834 real 0m0.232s 00:10:08.834 user 0m0.048s 00:10:08.834 sys 0m0.081s 00:10:08.834 18:20:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.834 ************************************ 00:10:08.834 END TEST nvme_flexible_data_placement 00:10:08.834 ************************************ 00:10:08.834 18:20:57 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:08.834 ************************************ 00:10:08.834 END TEST nvme_fdp 00:10:08.834 ************************************ 00:10:08.834 00:10:08.834 real 0m7.885s 00:10:08.834 user 0m1.034s 00:10:08.834 sys 0m1.594s 00:10:08.834 18:20:57 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:08.834 18:20:57 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:08.834 18:20:57 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:08.834 18:20:57 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:08.834 18:20:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:08.834 18:20:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:08.834 18:20:57 -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.834 ************************************ 00:10:08.834 START TEST nvme_rpc 00:10:08.834 ************************************ 00:10:08.834 18:20:57 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:08.834 * Looking for test storage... 00:10:08.834 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:08.834 18:20:57 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:08.834 18:20:57 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:08.834 18:20:57 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.095 18:20:57 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.095 --rc genhtml_branch_coverage=1 00:10:09.095 --rc genhtml_function_coverage=1 00:10:09.095 --rc genhtml_legend=1 00:10:09.095 --rc geninfo_all_blocks=1 00:10:09.095 --rc geninfo_unexecuted_blocks=1 00:10:09.095 00:10:09.095 ' 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.095 --rc genhtml_branch_coverage=1 00:10:09.095 --rc genhtml_function_coverage=1 00:10:09.095 --rc genhtml_legend=1 00:10:09.095 --rc geninfo_all_blocks=1 00:10:09.095 --rc geninfo_unexecuted_blocks=1 00:10:09.095 00:10:09.095 ' 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:10:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.095 --rc genhtml_branch_coverage=1 00:10:09.095 --rc genhtml_function_coverage=1 00:10:09.095 --rc genhtml_legend=1 00:10:09.095 --rc geninfo_all_blocks=1 00:10:09.095 --rc geninfo_unexecuted_blocks=1 00:10:09.095 00:10:09.095 ' 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:09.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.095 --rc genhtml_branch_coverage=1 00:10:09.095 --rc genhtml_function_coverage=1 00:10:09.095 --rc genhtml_legend=1 00:10:09.095 --rc geninfo_all_blocks=1 00:10:09.095 --rc geninfo_unexecuted_blocks=1 00:10:09.095 00:10:09.095 ' 00:10:09.095 18:20:57 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:09.095 18:20:57 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:09.095 
18:20:57 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:09.095 18:20:57 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:09.095 18:20:57 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78925 00:10:09.095 18:20:57 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:09.095 18:20:57 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:09.095 18:20:57 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78925 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 78925 ']' 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:09.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:09.095 18:20:57 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.095 [2024-10-08 18:20:57.886870] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:10:09.095 [2024-10-08 18:20:57.887020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78925 ] 00:10:09.355 [2024-10-08 18:20:58.021391] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:09.355 [2024-10-08 18:20:58.040338] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:09.355 [2024-10-08 18:20:58.091359] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.355 [2024-10-08 18:20:58.091446] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.929 18:20:58 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.929 18:20:58 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:09.929 18:20:58 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:10.189 Nvme0n1 00:10:10.189 18:20:59 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:10.189 18:20:59 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:10.450 request: 00:10:10.450 { 00:10:10.450 "bdev_name": "Nvme0n1", 00:10:10.450 "filename": "non_existing_file", 00:10:10.450 "method": "bdev_nvme_apply_firmware", 00:10:10.450 "req_id": 1 00:10:10.450 } 00:10:10.450 Got JSON-RPC error response 00:10:10.450 response: 00:10:10.450 { 00:10:10.450 "code": -32603, 00:10:10.450 "message": "open file failed." 
00:10:10.450 } 00:10:10.450 18:20:59 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:10.450 18:20:59 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:10.450 18:20:59 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:10.710 18:20:59 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:10.710 18:20:59 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78925 00:10:10.710 18:20:59 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 78925 ']' 00:10:10.710 18:20:59 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 78925 00:10:10.710 18:20:59 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:10.710 18:20:59 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.710 18:20:59 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78925 00:10:10.710 18:20:59 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.710 18:20:59 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.710 killing process with pid 78925 00:10:10.710 18:20:59 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78925' 00:10:10.710 18:20:59 nvme_rpc -- common/autotest_common.sh@969 -- # kill 78925 00:10:10.710 18:20:59 nvme_rpc -- common/autotest_common.sh@974 -- # wait 78925 00:10:10.971 00:10:10.971 real 0m2.247s 00:10:10.971 user 0m4.228s 00:10:10.971 sys 0m0.602s 00:10:10.971 18:20:59 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.971 ************************************ 00:10:10.971 END TEST nvme_rpc 00:10:10.971 18:20:59 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.971 ************************************ 00:10:11.231 18:20:59 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:11.231 18:20:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:11.231 
18:20:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.231 18:20:59 -- common/autotest_common.sh@10 -- # set +x 00:10:11.231 ************************************ 00:10:11.231 START TEST nvme_rpc_timeouts 00:10:11.231 ************************************ 00:10:11.231 18:20:59 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:11.231 * Looking for test storage... 00:10:11.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:11.231 18:20:59 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:11.231 18:20:59 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:11.231 18:20:59 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:11.231 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:11.231 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.231 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:11.232 
18:21:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.232 18:21:00 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:11.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.232 --rc genhtml_branch_coverage=1 00:10:11.232 --rc genhtml_function_coverage=1 00:10:11.232 --rc genhtml_legend=1 00:10:11.232 --rc geninfo_all_blocks=1 00:10:11.232 --rc geninfo_unexecuted_blocks=1 00:10:11.232 00:10:11.232 ' 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:11.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.232 
--rc genhtml_branch_coverage=1 00:10:11.232 --rc genhtml_function_coverage=1 00:10:11.232 --rc genhtml_legend=1 00:10:11.232 --rc geninfo_all_blocks=1 00:10:11.232 --rc geninfo_unexecuted_blocks=1 00:10:11.232 00:10:11.232 ' 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:11.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.232 --rc genhtml_branch_coverage=1 00:10:11.232 --rc genhtml_function_coverage=1 00:10:11.232 --rc genhtml_legend=1 00:10:11.232 --rc geninfo_all_blocks=1 00:10:11.232 --rc geninfo_unexecuted_blocks=1 00:10:11.232 00:10:11.232 ' 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:11.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.232 --rc genhtml_branch_coverage=1 00:10:11.232 --rc genhtml_function_coverage=1 00:10:11.232 --rc genhtml_legend=1 00:10:11.232 --rc geninfo_all_blocks=1 00:10:11.232 --rc geninfo_unexecuted_blocks=1 00:10:11.232 00:10:11.232 ' 00:10:11.232 18:21:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:11.232 18:21:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_78979 00:10:11.232 18:21:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_78979 00:10:11.232 18:21:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79011 00:10:11.232 18:21:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:11.232 18:21:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79011 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 79011 ']' 00:10:11.232 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.232 18:21:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.232 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:11.492 [2024-10-08 18:21:00.116059] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:10:11.492 [2024-10-08 18:21:00.116210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79011 ] 00:10:11.492 [2024-10-08 18:21:00.250738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:11.492 [2024-10-08 18:21:00.271395] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:11.492 [2024-10-08 18:21:00.323296] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.492 [2024-10-08 18:21:00.323340] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.134 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.134 18:21:00 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:12.134 Checking default timeout settings: 00:10:12.135 18:21:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:12.135 18:21:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:12.706 Making settings changes with rpc: 00:10:12.706 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:12.706 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:12.706 Check default vs. modified settings: 00:10:12.706 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:12.706 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_78979 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_78979 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:13.281 Setting action_on_timeout is changed as expected. 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_78979 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_78979 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:13.281 Setting timeout_us is changed as expected. 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_78979 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_78979 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:13.281 Setting timeout_admin_us is changed as expected. 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_78979 /tmp/settings_modified_78979 00:10:13.281 18:21:01 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79011 00:10:13.281 18:21:01 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 79011 ']' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 79011 00:10:13.281 18:21:01 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:13.281 18:21:01 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79011 00:10:13.281 killing process with pid 79011 00:10:13.281 18:21:01 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:13.281 18:21:01 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79011' 00:10:13.281 18:21:01 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 79011 00:10:13.281 18:21:01 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 79011 00:10:13.542 RPC TIMEOUT SETTING TEST PASSED. 00:10:13.542 18:21:02 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:10:13.542 ************************************ 00:10:13.542 END TEST nvme_rpc_timeouts 00:10:13.542 ************************************ 00:10:13.542 00:10:13.542 real 0m2.389s 00:10:13.542 user 0m4.658s 00:10:13.542 sys 0m0.616s 00:10:13.542 18:21:02 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.542 18:21:02 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:13.542 18:21:02 -- spdk/autotest.sh@239 -- # uname -s 00:10:13.542 18:21:02 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:13.542 18:21:02 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:13.542 18:21:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:13.542 18:21:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.542 18:21:02 -- common/autotest_common.sh@10 -- # set +x 00:10:13.542 ************************************ 00:10:13.542 START TEST sw_hotplug 00:10:13.542 ************************************ 00:10:13.542 18:21:02 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:13.542 * Looking for test storage... 
00:10:13.803 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:13.803 18:21:02 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:13.803 18:21:02 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:13.803 18:21:02 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:13.803 18:21:02 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:13.803 18:21:02 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:13.803 18:21:02 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:13.803 18:21:02 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:13.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.803 --rc genhtml_branch_coverage=1 00:10:13.803 --rc genhtml_function_coverage=1 00:10:13.803 --rc genhtml_legend=1 00:10:13.803 --rc geninfo_all_blocks=1 00:10:13.803 --rc geninfo_unexecuted_blocks=1 00:10:13.803 00:10:13.803 ' 00:10:13.803 18:21:02 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:13.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.803 --rc genhtml_branch_coverage=1 00:10:13.803 --rc genhtml_function_coverage=1 00:10:13.803 --rc genhtml_legend=1 00:10:13.803 --rc geninfo_all_blocks=1 00:10:13.803 --rc geninfo_unexecuted_blocks=1 00:10:13.803 00:10:13.803 ' 00:10:13.803 18:21:02 sw_hotplug -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:13.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.803 --rc genhtml_branch_coverage=1 00:10:13.803 --rc genhtml_function_coverage=1 00:10:13.803 --rc genhtml_legend=1 00:10:13.803 --rc geninfo_all_blocks=1 00:10:13.803 --rc geninfo_unexecuted_blocks=1 00:10:13.803 00:10:13.803 ' 00:10:13.803 18:21:02 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:13.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:13.803 --rc genhtml_branch_coverage=1 00:10:13.803 --rc genhtml_function_coverage=1 00:10:13.803 --rc genhtml_legend=1 00:10:13.803 --rc geninfo_all_blocks=1 00:10:13.803 --rc geninfo_unexecuted_blocks=1 00:10:13.803 00:10:13.803 ' 00:10:13.803 18:21:02 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:14.063 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:14.324 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:14.324 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:14.324 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:14.324 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:14.324 18:21:02 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:14.324 18:21:02 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:14.324 18:21:02 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:10:14.324 18:21:02 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:14.324 18:21:02 
sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:14.324 18:21:02 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 
00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:14.325 18:21:02 sw_hotplug -- scripts/common.sh@323 -- # uname -s 
00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:14.325 18:21:03 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:14.325 18:21:03 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:14.325 18:21:03 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:14.325 18:21:03 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:14.586 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:14.848 Waiting for block devices as requested 00:10:14.848 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.848 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:14.848 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.109 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:20.401 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:20.401 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:20.401 18:21:08 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:20.663 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:20.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:20.663 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:20.922 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:21.182 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.182 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:21.443 18:21:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79861 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:21.443 18:21:10 sw_hotplug 
-- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:21.443 18:21:10 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:21.443 18:21:10 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:21.443 18:21:10 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:21.443 18:21:10 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:21.443 18:21:10 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:21.443 18:21:10 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:21.703 Initializing NVMe Controllers 00:10:21.703 Attaching to 0000:00:10.0 00:10:21.703 Attaching to 0000:00:11.0 00:10:21.703 Attached to 0000:00:10.0 00:10:21.703 Attached to 0000:00:11.0 00:10:21.703 Initialization complete. Starting I/O... 
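The `cmp_versions`/`lt 1.15 2` trace earlier in this log compares dot-separated version strings in pure bash by splitting on `IFS=.-:` and walking the numeric fields. A minimal standalone sketch of the same idea (the helper name `ver_lt` is hypothetical, not the SPDK function):

```shell
# Minimal sketch of a dotted-version "less than" test in pure bash,
# in the spirit of the cmp_versions trace above. Not the SPDK code.
ver_lt() {
    local IFS=.
    # Unquoted expansion splits each argument on '.' into an array.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields compare as 0, so 1.15 vs 2 becomes 1.15 vs 2.0
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1  # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
ver_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

This is why the log takes the `return 0` branch at `scripts/common.sh@368`: field 1 of `1.15` is less than field 1 of `2`, so the lcov version qualifies for the coverage options.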
00:10:21.703 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:21.703 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:21.703 00:10:22.648 QEMU NVMe Ctrl (12340 ): 2564 I/Os completed (+2564) 00:10:22.648 QEMU NVMe Ctrl (12341 ): 2564 I/Os completed (+2564) 00:10:22.648 00:10:23.591 QEMU NVMe Ctrl (12340 ): 5705 I/Os completed (+3141) 00:10:23.591 QEMU NVMe Ctrl (12341 ): 5701 I/Os completed (+3137) 00:10:23.591 00:10:24.529 QEMU NVMe Ctrl (12340 ): 9247 I/Os completed (+3542) 00:10:24.529 QEMU NVMe Ctrl (12341 ): 9213 I/Os completed (+3512) 00:10:24.529 00:10:25.903 QEMU NVMe Ctrl (12340 ): 13630 I/Os completed (+4383) 00:10:25.903 QEMU NVMe Ctrl (12341 ): 13534 I/Os completed (+4321) 00:10:25.903 00:10:26.837 QEMU NVMe Ctrl (12340 ): 18218 I/Os completed (+4588) 00:10:26.837 QEMU NVMe Ctrl (12341 ): 18081 I/Os completed (+4547) 00:10:26.837 00:10:27.403 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:27.403 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.403 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.403 [2024-10-08 18:21:16.145309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:27.403 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:27.403 [2024-10-08 18:21:16.146196] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.146259] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.146285] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.146308] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:27.403 [2024-10-08 18:21:16.147552] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.147645] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.147707] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.147732] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/device 00:10:27.403 EAL: Scan for (pci) bus failed. 00:10:27.403 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.403 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.403 [2024-10-08 18:21:16.166122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:27.403 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:27.403 [2024-10-08 18:21:16.166916] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.167008] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.167023] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.167036] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:27.403 [2024-10-08 18:21:16.168114] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.168145] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.168157] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 [2024-10-08 18:21:16.168167] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.403 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:27.403 EAL: Scan for (pci) bus failed. 
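The `nvme_in_userspace`/`iter_pci_class_code 01 08 02` trace above finds NVMe controllers by filtering `lspci -mm -n -D` for class `01` (mass storage), subclass `08` (NVM), prog-if `02` (NVMe). A sketch of that filter, run against canned sample text so it works without `lspci` (the sample lines approximate pciutils machine-readable output and are invented for illustration):

```shell
# Sketch of NVMe enumeration by PCI class code, after the trace above.
# The sample approximates `lspci -mm -n -D` output; fields are the BDF,
# then the quoted class code, vendor, and device IDs.
sample='0000:00:03.0 "0100" "1af4" "1001" -r00 "" ""
0000:00:10.0 "0108" "1b36" "0010" -p02 "" ""
0000:00:11.0 "0108" "1b36" "0010" -p02 "" ""'

nvme_bdfs() {
    # $1: lspci -mm -n -D style text; print one BDF per NVMe controller.
    printf '%s\n' "$1" | grep -- '-p02' | awk '$2 == "\"0108\"" {print $1}'
}

nvme_bdfs "$sample"
```

On a live system the input would come from `lspci -mm -n -D` itself, which is exactly the pipeline visible at `scripts/common.sh@242`–`@245` in the log.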
00:10:27.403 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:27.403 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:27.662 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:27.662 Attaching to 0000:00:10.0 00:10:27.662 Attached to 0000:00:10.0 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:27.662 18:21:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:27.662 Attaching to 0000:00:11.0 00:10:27.662 Attached to 0000:00:11.0 00:10:28.596 QEMU NVMe Ctrl (12340 ): 4696 I/Os completed (+4696) 00:10:28.596 QEMU NVMe Ctrl (12341 ): 4290 I/Os completed (+4290) 00:10:28.596 00:10:29.529 QEMU NVMe Ctrl (12340 ): 9343 I/Os completed (+4647) 00:10:29.529 QEMU NVMe Ctrl (12341 ): 8925 I/Os completed (+4635) 00:10:29.530 00:10:30.903 QEMU NVMe Ctrl (12340 ): 13750 I/Os completed (+4407) 00:10:30.903 QEMU NVMe Ctrl (12341 ): 13214 I/Os completed (+4289) 00:10:30.903 00:10:31.494 QEMU NVMe Ctrl (12340 ): 18020 I/Os completed (+4270) 00:10:31.494 QEMU NVMe Ctrl (12341 ): 17371 I/Os completed (+4157) 00:10:31.494 00:10:32.882 QEMU NVMe Ctrl (12340 ): 21065 I/Os completed (+3045) 00:10:32.882 QEMU NVMe Ctrl (12341 ): 20407 I/Os completed (+3036) 00:10:32.882 00:10:33.831 QEMU NVMe Ctrl (12340 ): 23808 I/Os completed (+2743) 00:10:33.831 
QEMU NVMe Ctrl (12341 ): 23168 I/Os completed (+2761) 00:10:33.831 00:10:34.770 QEMU NVMe Ctrl (12340 ): 26929 I/Os completed (+3121) 00:10:34.770 QEMU NVMe Ctrl (12341 ): 26366 I/Os completed (+3198) 00:10:34.770 00:10:35.710 QEMU NVMe Ctrl (12340 ): 30424 I/Os completed (+3495) 00:10:35.710 QEMU NVMe Ctrl (12341 ): 30463 I/Os completed (+4097) 00:10:35.710 00:10:36.654 QEMU NVMe Ctrl (12340 ): 33309 I/Os completed (+2885) 00:10:36.654 QEMU NVMe Ctrl (12341 ): 33359 I/Os completed (+2896) 00:10:36.654 00:10:37.597 QEMU NVMe Ctrl (12340 ): 37558 I/Os completed (+4249) 00:10:37.597 QEMU NVMe Ctrl (12341 ): 37616 I/Os completed (+4257) 00:10:37.597 00:10:38.537 QEMU NVMe Ctrl (12340 ): 41789 I/Os completed (+4231) 00:10:38.537 QEMU NVMe Ctrl (12341 ): 41862 I/Os completed (+4246) 00:10:38.537 00:10:39.916 QEMU NVMe Ctrl (12340 ): 45911 I/Os completed (+4122) 00:10:39.916 QEMU NVMe Ctrl (12341 ): 46022 I/Os completed (+4160) 00:10:39.916 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:39.916 [2024-10-08 18:21:28.429281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:39.916 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:39.916 [2024-10-08 18:21:28.430585] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.430731] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.430782] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.430848] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:39.916 [2024-10-08 18:21:28.432528] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.432670] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.432708] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.432734] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:39.916 [2024-10-08 18:21:28.448580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:39.916 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:39.916 [2024-10-08 18:21:28.449726] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.450349] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.450454] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.450519] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:39.916 [2024-10-08 18:21:28.453178] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.453288] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.453345] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 [2024-10-08 18:21:28.453405] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:39.916 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:39.916 EAL: Scan for (pci) bus failed. 
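The repeated `echo 1` steps in the trace are sysfs writes that implement the software hotplug: writing `1` to a device's `remove` node detaches it, and writing `1` to `/sys/bus/pci/rescan` makes the kernel re-enumerate the bus so the device reappears. A sketch with the sysfs root parameterized so it can be exercised without root privileges (the helper names and the `SYSFS_ROOT` variable are illustrative, not SPDK's):

```shell
# Sketch of sysfs-driven PCI software hotplug, mirroring the "echo 1"
# writes in the trace above. SYSFS_ROOT defaults to /sys but can be
# pointed at a scratch directory for testing.
SYSFS_ROOT=${SYSFS_ROOT:-/sys}

remove_pci_dev() {  # $1 = BDF, e.g. 0000:00:10.0
    echo 1 > "$SYSFS_ROOT/bus/pci/devices/$1/remove"
}

rescan_pci_bus() {
    echo 1 > "$SYSFS_ROOT/bus/pci/rescan"
}
```

After the remove, the SPDK target sees the controller vanish (the `nvme_ctrlr_fail ... in failed state` lines), aborts outstanding I/O trackers, and the rescan plus driver rebind brings it back for the next hotplug event.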
00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:39.916 Attaching to 0000:00:10.0 00:10:39.916 Attached to 0000:00:10.0 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:39.916 18:21:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:39.916 Attaching to 0000:00:11.0 00:10:39.916 Attached to 0000:00:11.0 00:10:40.488 QEMU NVMe Ctrl (12340 ): 1864 I/Os completed (+1864) 00:10:40.488 QEMU NVMe Ctrl (12341 ): 1699 I/Os completed (+1699) 00:10:40.488 00:10:41.912 QEMU NVMe Ctrl (12340 ): 4451 I/Os completed (+2587) 00:10:41.912 QEMU NVMe Ctrl (12341 ): 4305 I/Os completed (+2606) 00:10:41.912 00:10:42.845 QEMU NVMe Ctrl (12340 ): 7892 I/Os completed (+3441) 00:10:42.845 QEMU NVMe Ctrl (12341 ): 7781 I/Os completed (+3476) 00:10:42.845 00:10:43.778 QEMU NVMe Ctrl (12340 ): 11600 I/Os completed (+3708) 00:10:43.778 QEMU NVMe Ctrl (12341 ): 11435 I/Os completed (+3654) 00:10:43.778 00:10:44.717 QEMU NVMe Ctrl (12340 ): 16459 I/Os completed (+4859) 00:10:44.717 QEMU NVMe Ctrl (12341 ): 15522 I/Os completed (+4087) 00:10:44.717 00:10:45.672 QEMU NVMe Ctrl (12340 ): 19336 I/Os completed (+2877) 00:10:45.673 QEMU NVMe Ctrl (12341 ): 18454 I/Os completed (+2932) 00:10:45.673 00:10:46.617 QEMU NVMe Ctrl (12340 ): 21664 I/Os completed (+2328) 00:10:46.617 QEMU 
NVMe Ctrl (12341 ): 20796 I/Os completed (+2342) 00:10:46.617 00:10:47.560 QEMU NVMe Ctrl (12340 ): 24163 I/Os completed (+2499) 00:10:47.560 QEMU NVMe Ctrl (12341 ): 23312 I/Os completed (+2516) 00:10:47.560 00:10:48.612 QEMU NVMe Ctrl (12340 ): 26543 I/Os completed (+2380) 00:10:48.612 QEMU NVMe Ctrl (12341 ): 25704 I/Os completed (+2392) 00:10:48.612 00:10:49.555 QEMU NVMe Ctrl (12340 ): 29979 I/Os completed (+3436) 00:10:49.555 QEMU NVMe Ctrl (12341 ): 29140 I/Os completed (+3436) 00:10:49.555 00:10:50.492 QEMU NVMe Ctrl (12340 ): 33968 I/Os completed (+3989) 00:10:50.492 QEMU NVMe Ctrl (12341 ): 33033 I/Os completed (+3893) 00:10:50.492 00:10:51.879 QEMU NVMe Ctrl (12340 ): 36824 I/Os completed (+2856) 00:10:51.879 QEMU NVMe Ctrl (12341 ): 35889 I/Os completed (+2856) 00:10:51.879 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.140 [2024-10-08 18:21:40.738654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:10:52.140 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:52.140 [2024-10-08 18:21:40.742111] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.742241] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.742299] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.742340] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:52.140 [2024-10-08 18:21:40.744851] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.744930] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.744955] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.744974] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.140 [2024-10-08 18:21:40.767268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:52.140 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:52.140 [2024-10-08 18:21:40.768725] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.768954] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.768981] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.769001] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:52.140 [2024-10-08 18:21:40.770610] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.770791] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.770881] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 [2024-10-08 18:21:40.770920] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:52.140 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:52.401 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.401 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.401 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.401 18:21:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:52.401 Attaching to 0000:00:10.0 00:10:52.401 Attached to 0000:00:10.0 00:10:52.401 18:21:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:52.401 18:21:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.401 18:21:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:52.401 Attaching to 0000:00:11.0 00:10:52.401 Attached to 0000:00:11.0 00:10:52.401 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:52.401 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:52.401 [2024-10-08 18:21:41.087690] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:04.635 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:04.635 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.635 18:21:53 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.94 00:11:04.635 18:21:53 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.94 00:11:04.635 18:21:53 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:04.635 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.94 00:11:04.635 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.94 2 00:11:04.635 remove_attach_helper took 42.94s to complete (handling 2 nvme drive(s)) 18:21:53 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:11.223 18:21:59 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79861 00:11:11.223 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79861) - No such process 00:11:11.223 18:21:59 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79861 00:11:11.223 18:21:59 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:11.223 18:21:59 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:11.223 18:21:59 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:11.223 18:21:59 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # 
spdk_tgt_pid=80405 00:11:11.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.223 18:21:59 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:11.223 18:21:59 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80405 00:11:11.223 18:21:59 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:11.223 18:21:59 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 80405 ']' 00:11:11.223 18:21:59 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.223 18:21:59 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.223 18:21:59 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.223 18:21:59 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.223 18:21:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.223 [2024-10-08 18:21:59.190592] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:11:11.223 [2024-10-08 18:21:59.190780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80405 ] 00:11:11.223 [2024-10-08 18:21:59.327307] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
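The `helper_time=42.94` result earlier in the log comes from `timing_cmd`, which wraps the helper in bash's `time` keyword with `TIMEFORMAT=%2R` so only the wall-clock seconds, to two decimals, are reported. A minimal sketch of that capture pattern (simplified: it discards the timed command's own output, which the real SPDK helper does not):

```shell
# Sketch of capturing a command's wall-clock time via the bash `time`
# keyword. TIMEFORMAT=%2R limits the report to real seconds, 2 decimals.
timing_cmd() {
    local TIMEFORMAT=%2R t
    # `time` writes its report to stderr; the braces let us redirect it
    # into the command substitution while silencing the command itself.
    t=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
    echo "$t"
}

timing_cmd sleep 0.1   # prints the elapsed seconds, roughly 0.10
```

The `printf 'remove_attach_helper took %ss ...'` line in the log then just interpolates this captured value.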
00:11:11.223 [2024-10-08 18:21:59.344455] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.223 [2024-10-08 18:21:59.415911] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.223 18:22:00 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.223 18:22:00 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:11.223 18:22:00 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:11.223 18:22:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.223 18:22:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.223 18:22:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.223 18:22:00 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:11.223 18:22:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:11.223 18:22:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:11.223 18:22:00 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:11.223 18:22:00 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:11.223 18:22:00 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:11.223 18:22:00 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:11.484 18:22:00 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:11.484 18:22:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:11.484 18:22:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:11.484 18:22:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:11.484 18:22:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:11.484 18:22:00 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # 
for dev in "${nvmes[@]}" 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.065 18:22:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.065 18:22:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.065 18:22:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:18.065 [2024-10-08 18:22:06.160748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:18.065 [2024-10-08 18:22:06.162024] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.065 [2024-10-08 18:22:06.162051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.065 [2024-10-08 18:22:06.162062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.065 [2024-10-08 18:22:06.162080] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.065 [2024-10-08 18:22:06.162087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.065 [2024-10-08 18:22:06.162101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.065 [2024-10-08 18:22:06.162108] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.065 [2024-10-08 18:22:06.162118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.065 [2024-10-08 18:22:06.162128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.065 [2024-10-08 18:22:06.162136] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.065 [2024-10-08 18:22:06.162142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.065 [2024-10-08 18:22:06.162150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.065 18:22:06 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:18.065 18:22:06 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.065 18:22:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.065 [2024-10-08 18:22:06.660758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:18.065 [2024-10-08 18:22:06.661800] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.065 [2024-10-08 18:22:06.661824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.065 [2024-10-08 18:22:06.661836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.065 [2024-10-08 18:22:06.661847] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.065 [2024-10-08 18:22:06.661855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.065 [2024-10-08 18:22:06.661862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.065 [2024-10-08 18:22:06.661870] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.065 [2024-10-08 18:22:06.661876] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.065 [2024-10-08 18:22:06.661886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.065 [2024-10-08 18:22:06.661892] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.065 [2024-10-08 18:22:06.661900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:18.065 [2024-10-08 18:22:06.661906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:18.065 18:22:06 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:18.065 18:22:06 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:18.631 18:22:07 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.631 18:22:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:18.631 18:22:07 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:18.631 18:22:07 sw_hotplug -- 
nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:18.631 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:18.889 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.889 18:22:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:31.110 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:31.110 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:31.110 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:31.110 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.110 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.110 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.110 18:22:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.110 18:22:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.110 18:22:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.110 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:31.110 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.110 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.110 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 
00:11:31.111 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.111 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.111 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:31.111 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.111 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.111 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.111 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.111 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.111 18:22:19 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.111 18:22:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.111 [2024-10-08 18:22:19.560960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:31.111 [2024-10-08 18:22:19.562202] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.111 [2024-10-08 18:22:19.562310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.111 [2024-10-08 18:22:19.562373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.111 [2024-10-08 18:22:19.562408] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.111 [2024-10-08 18:22:19.562426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.111 [2024-10-08 18:22:19.562452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.111 [2024-10-08 18:22:19.562460] nvme_pcie_common.c: 
748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.111 [2024-10-08 18:22:19.562469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.111 [2024-10-08 18:22:19.562476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.111 [2024-10-08 18:22:19.562485] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.111 [2024-10-08 18:22:19.562492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.111 [2024-10-08 18:22:19.562502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.111 18:22:19 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.111 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:31.111 18:22:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:31.372 [2024-10-08 18:22:19.960954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:31.372 [2024-10-08 18:22:19.962033] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.372 [2024-10-08 18:22:19.962062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.372 [2024-10-08 18:22:19.962073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.372 [2024-10-08 18:22:19.962083] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.372 [2024-10-08 18:22:19.962091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.372 [2024-10-08 18:22:19.962099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.372 [2024-10-08 18:22:19.962107] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.372 [2024-10-08 18:22:19.962114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.372 [2024-10-08 18:22:19.962122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.372 [2024-10-08 18:22:19.962129] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.372 [2024-10-08 18:22:19.962138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.372 [2024-10-08 18:22:19.962145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.372 18:22:20 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:31.372 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.372 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.372 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.372 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.372 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.372 18:22:20 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.372 18:22:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.372 18:22:20 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.372 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:31.372 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:31.372 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.633 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.633 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.633 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.633 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.633 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.633 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.633 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:31.633 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.633 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.633 18:22:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@70 
-- # bdev_bdfs 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.079 18:22:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.079 18:22:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.079 18:22:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:44.079 [2024-10-08 18:22:32.461179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:44.079 [2024-10-08 18:22:32.463019] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.079 [2024-10-08 18:22:32.463150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.079 [2024-10-08 18:22:32.463237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.079 [2024-10-08 18:22:32.463281] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.079 [2024-10-08 18:22:32.463304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.079 [2024-10-08 18:22:32.463336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.079 [2024-10-08 18:22:32.463400] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.079 [2024-10-08 18:22:32.463427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.079 [2024-10-08 18:22:32.463458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.079 [2024-10-08 18:22:32.463503] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:44.079 [2024-10-08 18:22:32.463944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.079 [2024-10-08 18:22:32.464310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.079 18:22:32 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.079 18:22:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.079 18:22:32 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:44.079 18:22:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:44.079 [2024-10-08 18:22:32.861188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:44.079 [2024-10-08 18:22:32.863036] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.079 [2024-10-08 18:22:32.863197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.079 [2024-10-08 18:22:32.863287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.079 [2024-10-08 18:22:32.863325] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.079 [2024-10-08 18:22:32.863353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.079 [2024-10-08 18:22:32.863384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.079 [2024-10-08 18:22:32.863417] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.079 [2024-10-08 18:22:32.863490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.079 [2024-10-08 18:22:32.863527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.079 [2024-10-08 18:22:32.863558] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:44.079 [2024-10-08 18:22:32.863960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:44.079 [2024-10-08 18:22:32.864411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:44.341 18:22:33 sw_hotplug -- 
nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:44.341 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:44.341 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:44.341 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:44.341 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:44.341 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:44.341 18:22:33 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.341 18:22:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:44.341 18:22:33 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.341 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:44.341 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:44.341 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.341 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.341 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:44.601 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:44.601 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.601 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:44.601 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:44.601 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:44.601 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:44.601 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:44.601 18:22:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:56.822 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@70 
-- # bdev_bdfs 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.31 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.31 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.31 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.31 2 00:11:56.823 remove_attach_helper took 45.31s to complete (handling 2 nvme drive(s)) 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:56.823 18:22:45 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:56.823 18:22:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:03.476 18:22:51 sw_hotplug -- 
nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:03.476 18:22:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.476 18:22:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:03.476 18:22:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:03.476 [2024-10-08 18:22:51.498433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:03.476 [2024-10-08 18:22:51.499306] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.476 [2024-10-08 18:22:51.499337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.476 [2024-10-08 18:22:51.499350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.476 [2024-10-08 18:22:51.499367] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.476 [2024-10-08 18:22:51.499375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.476 [2024-10-08 18:22:51.499384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.476 [2024-10-08 18:22:51.499391] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.476 [2024-10-08 18:22:51.499402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.476 [2024-10-08 18:22:51.499409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:12:03.476 [2024-10-08 18:22:51.499418] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.476 [2024-10-08 18:22:51.499425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.476 [2024-10-08 18:22:51.499434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:03.476 18:22:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:03.476 18:22:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.476 18:22:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:03.476 [2024-10-08 18:22:51.998423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:12:03.476 [2024-10-08 18:22:51.999346] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.476 [2024-10-08 18:22:51.999375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.476 [2024-10-08 18:22:51.999386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.476 [2024-10-08 18:22:51.999397] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.476 [2024-10-08 18:22:51.999406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.476 [2024-10-08 18:22:51.999413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.476 [2024-10-08 18:22:51.999422] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.476 [2024-10-08 18:22:51.999428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.476 [2024-10-08 18:22:51.999436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.476 [2024-10-08 18:22:51.999443] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.476 [2024-10-08 18:22:51.999455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.476 [2024-10-08 18:22:51.999461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.476 18:22:52 sw_hotplug -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.476 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:03.476 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:03.737 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:03.737 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:03.737 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:03.737 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:03.737 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:03.737 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:03.737 18:22:52 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.737 18:22:52 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:03.737 18:22:52 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.737 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:03.737 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:03.998 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:03.998 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:03.998 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:03.998 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:03.998 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:03.998 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:03.998 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:03.998 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:03.998 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:03.998 18:22:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:03.998 18:22:52 sw_hotplug -- 
nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:16.233 18:23:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.233 18:23:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:16.233 18:23:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:16.233 18:23:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.233 18:23:04 sw_hotplug -- common/autotest_common.sh@10 -- # set 
+x 00:12:16.233 18:23:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:16.233 18:23:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:16.233 [2024-10-08 18:23:04.898641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:16.233 [2024-10-08 18:23:04.899494] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.233 [2024-10-08 18:23:04.899524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.233 [2024-10-08 18:23:04.899536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.233 [2024-10-08 18:23:04.899553] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.233 [2024-10-08 18:23:04.899561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.233 [2024-10-08 18:23:04.899570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.233 [2024-10-08 18:23:04.899577] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.233 [2024-10-08 18:23:04.899585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.233 [2024-10-08 18:23:04.899592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.233 [2024-10-08 18:23:04.899601] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.233 [2024-10-08 18:23:04.899608] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.233 [2024-10-08 18:23:04.899617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.494 [2024-10-08 18:23:05.298637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:16.494 [2024-10-08 18:23:05.299543] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.494 [2024-10-08 18:23:05.299574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.494 [2024-10-08 18:23:05.299587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.494 [2024-10-08 18:23:05.299598] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.494 [2024-10-08 18:23:05.299608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.494 [2024-10-08 18:23:05.299615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.494 [2024-10-08 18:23:05.299625] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.494 [2024-10-08 18:23:05.299632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.494 [2024-10-08 18:23:05.299642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.494 [2024-10-08 18:23:05.299649] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting 
outstanding command 00:12:16.494 [2024-10-08 18:23:05.299658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.494 [2024-10-08 18:23:05.299665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:16.755 18:23:05 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.755 18:23:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.755 18:23:05 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:16.755 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:17.015 
18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:17.015 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:17.015 18:23:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:29.253 18:23:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.253 18:23:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:29.253 18:23:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:29.253 18:23:17 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.253 18:23:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:29.253 18:23:17 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:29.253 18:23:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:29.253 [2024-10-08 18:23:17.798835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:29.253 [2024-10-08 18:23:17.799727] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.253 [2024-10-08 18:23:17.799842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.253 [2024-10-08 18:23:17.799946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.253 [2024-10-08 18:23:17.799983] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.253 [2024-10-08 18:23:17.800001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.253 [2024-10-08 18:23:17.800028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.253 [2024-10-08 18:23:17.800088] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.253 [2024-10-08 18:23:17.800110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.253 [2024-10-08 18:23:17.800134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.253 
[2024-10-08 18:23:17.800158] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.253 [2024-10-08 18:23:17.800174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.253 [2024-10-08 18:23:17.800450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.514 [2024-10-08 18:23:18.198827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:29.514 [2024-10-08 18:23:18.199671] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.514 [2024-10-08 18:23:18.199779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.514 [2024-10-08 18:23:18.199852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.514 [2024-10-08 18:23:18.199883] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.514 [2024-10-08 18:23:18.199904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.514 [2024-10-08 18:23:18.199930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.514 [2024-10-08 18:23:18.199957] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.514 [2024-10-08 18:23:18.199974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.514 [2024-10-08 18:23:18.200035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.514 [2024-10-08 18:23:18.200059] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:29.514 [2024-10-08 18:23:18.200077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.514 [2024-10-08 18:23:18.200099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.514 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:29.514 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:29.514 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:29.514 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:29.514 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:29.514 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:29.514 18:23:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.514 18:23:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:29.514 18:23:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.514 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:29.514 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 
00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:29.776 18:23:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.17 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.17 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:12:42.081 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT 
SIGTERM EXIT 00:12:42.081 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80405 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 80405 ']' 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 80405 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80405 00:12:42.081 killing process with pid 80405 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80405' 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@969 -- # kill 80405 00:12:42.081 18:23:30 sw_hotplug -- common/autotest_common.sh@974 -- # wait 80405 00:12:42.346 18:23:30 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:42.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:43.181 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:43.181 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:43.181 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.181 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:43.181 ************************************ 00:12:43.181 END TEST sw_hotplug 00:12:43.181 ************************************ 00:12:43.181 00:12:43.181 real 2m29.666s 00:12:43.181 user 1m50.370s 00:12:43.181 sys 0m17.819s 00:12:43.181 18:23:31 sw_hotplug -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:43.181 18:23:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.444 18:23:32 -- 
spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:43.444 18:23:32 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:43.444 18:23:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:43.444 18:23:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.444 18:23:32 -- common/autotest_common.sh@10 -- # set +x 00:12:43.444 ************************************ 00:12:43.444 START TEST nvme_xnvme 00:12:43.444 ************************************ 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:43.444 * Looking for test storage... 00:12:43.444 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:43.444 18:23:32 nvme_xnvme 
-- scripts/common.sh@344 -- # case "$op" in 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:43.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.444 --rc genhtml_branch_coverage=1 00:12:43.444 --rc genhtml_function_coverage=1 00:12:43.444 --rc genhtml_legend=1 00:12:43.444 --rc geninfo_all_blocks=1 00:12:43.444 --rc geninfo_unexecuted_blocks=1 00:12:43.444 00:12:43.444 ' 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:43.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.444 
--rc genhtml_branch_coverage=1 00:12:43.444 --rc genhtml_function_coverage=1 00:12:43.444 --rc genhtml_legend=1 00:12:43.444 --rc geninfo_all_blocks=1 00:12:43.444 --rc geninfo_unexecuted_blocks=1 00:12:43.444 00:12:43.444 ' 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:43.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.444 --rc genhtml_branch_coverage=1 00:12:43.444 --rc genhtml_function_coverage=1 00:12:43.444 --rc genhtml_legend=1 00:12:43.444 --rc geninfo_all_blocks=1 00:12:43.444 --rc geninfo_unexecuted_blocks=1 00:12:43.444 00:12:43.444 ' 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:43.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:43.444 --rc genhtml_branch_coverage=1 00:12:43.444 --rc genhtml_function_coverage=1 00:12:43.444 --rc genhtml_legend=1 00:12:43.444 --rc geninfo_all_blocks=1 00:12:43.444 --rc geninfo_unexecuted_blocks=1 00:12:43.444 00:12:43.444 ' 00:12:43.444 18:23:32 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.444 18:23:32 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.444 18:23:32 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.444 18:23:32 nvme_xnvme -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.444 18:23:32 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.444 18:23:32 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:43.444 18:23:32 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.444 18:23:32 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:43.444 18:23:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.444 ************************************ 00:12:43.444 START TEST xnvme_to_malloc_dd_copy 00:12:43.444 ************************************ 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- 
common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:43.444 18:23:32 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:43.444 18:23:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:43.705 { 00:12:43.705 "subsystems": [ 00:12:43.705 { 00:12:43.705 "subsystem": "bdev", 00:12:43.705 "config": [ 00:12:43.705 { 00:12:43.705 "params": { 00:12:43.705 "block_size": 512, 00:12:43.705 "num_blocks": 2097152, 00:12:43.705 "name": "malloc0" 00:12:43.705 }, 00:12:43.705 "method": "bdev_malloc_create" 00:12:43.705 }, 00:12:43.705 { 00:12:43.705 "params": { 00:12:43.705 "io_mechanism": "libaio", 00:12:43.705 "filename": "/dev/nullb0", 00:12:43.705 "name": "null0" 00:12:43.706 }, 00:12:43.706 "method": "bdev_xnvme_create" 00:12:43.706 }, 00:12:43.706 { 00:12:43.706 "method": "bdev_wait_for_examine" 00:12:43.706 } 00:12:43.706 ] 00:12:43.706 } 00:12:43.706 ] 00:12:43.706 } 00:12:43.706 [2024-10-08 18:23:32.338444] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:12:43.706 [2024-10-08 18:23:32.338834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81788 ] 00:12:43.706 [2024-10-08 18:23:32.475105] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:43.706 [2024-10-08 18:23:32.488622] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.967 [2024-10-08 18:23:32.563641] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.351  [2024-10-08T18:23:35.143Z] Copying: 217/1024 [MB] (217 MBps) [2024-10-08T18:23:36.086Z] Copying: 434/1024 [MB] (217 MBps) [2024-10-08T18:23:37.029Z] Copying: 721/1024 [MB] (286 MBps) [2024-10-08T18:23:37.029Z] Copying: 1021/1024 [MB] (299 MBps) [2024-10-08T18:23:37.600Z] Copying: 1024/1024 [MB] (average 255 MBps) 00:12:48.750 00:12:48.750 18:23:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:48.750 18:23:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:48.750 18:23:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:48.750 18:23:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:48.750 { 00:12:48.750 "subsystems": [ 00:12:48.750 { 00:12:48.750 "subsystem": "bdev", 00:12:48.750 "config": [ 00:12:48.750 { 00:12:48.750 "params": { 00:12:48.750 "block_size": 512, 00:12:48.750 "num_blocks": 2097152, 00:12:48.750 "name": "malloc0" 00:12:48.750 }, 00:12:48.750 "method": "bdev_malloc_create" 00:12:48.750 }, 00:12:48.750 { 00:12:48.750 "params": { 00:12:48.750 "io_mechanism": "libaio", 00:12:48.750 "filename": "/dev/nullb0", 00:12:48.750 "name": "null0" 
00:12:48.750 }, 00:12:48.750 "method": "bdev_xnvme_create" 00:12:48.750 }, 00:12:48.750 { 00:12:48.750 "method": "bdev_wait_for_examine" 00:12:48.750 } 00:12:48.750 ] 00:12:48.750 } 00:12:48.750 ] 00:12:48.750 } 00:12:48.750 [2024-10-08 18:23:37.505005] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:48.750 [2024-10-08 18:23:37.505238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81853 ] 00:12:49.011 [2024-10-08 18:23:37.635129] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:49.011 [2024-10-08 18:23:37.654917] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.011 [2024-10-08 18:23:37.702194] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.395  [2024-10-08T18:23:40.186Z] Copying: 302/1024 [MB] (302 MBps) [2024-10-08T18:23:41.127Z] Copying: 606/1024 [MB] (303 MBps) [2024-10-08T18:23:41.698Z] Copying: 909/1024 [MB] (302 MBps) [2024-10-08T18:23:41.957Z] Copying: 1024/1024 [MB] (average 303 MBps) 00:12:53.107 00:12:53.107 18:23:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:53.107 18:23:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:53.107 18:23:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:53.107 18:23:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:53.107 18:23:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:53.107 18:23:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- 
common/autotest_common.sh@10 -- # set +x 00:12:53.107 { 00:12:53.107 "subsystems": [ 00:12:53.107 { 00:12:53.107 "subsystem": "bdev", 00:12:53.107 "config": [ 00:12:53.107 { 00:12:53.107 "params": { 00:12:53.107 "block_size": 512, 00:12:53.107 "num_blocks": 2097152, 00:12:53.107 "name": "malloc0" 00:12:53.107 }, 00:12:53.107 "method": "bdev_malloc_create" 00:12:53.107 }, 00:12:53.107 { 00:12:53.107 "params": { 00:12:53.107 "io_mechanism": "io_uring", 00:12:53.107 "filename": "/dev/nullb0", 00:12:53.107 "name": "null0" 00:12:53.107 }, 00:12:53.107 "method": "bdev_xnvme_create" 00:12:53.107 }, 00:12:53.107 { 00:12:53.107 "method": "bdev_wait_for_examine" 00:12:53.107 } 00:12:53.107 ] 00:12:53.107 } 00:12:53.107 ] 00:12:53.107 } 00:12:53.107 [2024-10-08 18:23:41.902631] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:12:53.107 [2024-10-08 18:23:41.902791] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81907 ] 00:12:53.366 [2024-10-08 18:23:42.032873] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:53.366 [2024-10-08 18:23:42.051878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.366 [2024-10-08 18:23:42.103483] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.748  [2024-10-08T18:23:44.535Z] Copying: 312/1024 [MB] (312 MBps) [2024-10-08T18:23:45.473Z] Copying: 624/1024 [MB] (312 MBps) [2024-10-08T18:23:45.733Z] Copying: 937/1024 [MB] (312 MBps) [2024-10-08T18:23:46.302Z] Copying: 1024/1024 [MB] (average 312 MBps) 00:12:57.452 00:12:57.452 18:23:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:57.452 18:23:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:57.452 18:23:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:57.452 18:23:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:57.452 { 00:12:57.452 "subsystems": [ 00:12:57.452 { 00:12:57.452 "subsystem": "bdev", 00:12:57.452 "config": [ 00:12:57.452 { 00:12:57.452 "params": { 00:12:57.452 "block_size": 512, 00:12:57.452 "num_blocks": 2097152, 00:12:57.452 "name": "malloc0" 00:12:57.452 }, 00:12:57.452 "method": "bdev_malloc_create" 00:12:57.452 }, 00:12:57.452 { 00:12:57.452 "params": { 00:12:57.452 "io_mechanism": "io_uring", 00:12:57.452 "filename": "/dev/nullb0", 00:12:57.452 "name": "null0" 00:12:57.452 }, 00:12:57.452 "method": "bdev_xnvme_create" 00:12:57.452 }, 00:12:57.452 { 00:12:57.452 "method": "bdev_wait_for_examine" 00:12:57.452 } 00:12:57.452 ] 00:12:57.452 } 00:12:57.452 ] 00:12:57.452 } 00:12:57.452 [2024-10-08 18:23:46.181020] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:12:57.452 [2024-10-08 18:23:46.181310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81961 ] 00:12:57.712 [2024-10-08 18:23:46.311928] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:57.712 [2024-10-08 18:23:46.329952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.712 [2024-10-08 18:23:46.372781] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.097  [2024-10-08T18:23:48.894Z] Copying: 315/1024 [MB] (315 MBps) [2024-10-08T18:23:49.876Z] Copying: 632/1024 [MB] (316 MBps) [2024-10-08T18:23:50.137Z] Copying: 949/1024 [MB] (316 MBps) [2024-10-08T18:23:50.399Z] Copying: 1024/1024 [MB] (average 316 MBps) 00:13:01.549 00:13:01.549 18:23:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:01.549 18:23:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:01.549 00:13:01.549 real 0m18.142s 00:13:01.549 user 0m14.743s 00:13:01.549 sys 0m2.911s 00:13:01.549 18:23:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:01.549 ************************************ 00:13:01.549 END TEST xnvme_to_malloc_dd_copy 00:13:01.549 ************************************ 00:13:01.549 18:23:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:01.810 18:23:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:01.810 18:23:50 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:01.810 18:23:50 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:01.810 18:23:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 
00:13:01.810 ************************************ 00:13:01.810 START TEST xnvme_bdevperf 00:13:01.810 ************************************ 00:13:01.810 18:23:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:13:01.810 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:01.810 18:23:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:01.810 18:23:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.811 18:23:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.811 { 00:13:01.811 "subsystems": [ 00:13:01.811 { 00:13:01.811 "subsystem": "bdev", 00:13:01.811 "config": [ 00:13:01.811 { 00:13:01.811 "params": { 00:13:01.811 "io_mechanism": "libaio", 00:13:01.811 "filename": "/dev/nullb0", 00:13:01.811 "name": "null0" 00:13:01.811 }, 00:13:01.811 "method": "bdev_xnvme_create" 00:13:01.811 }, 00:13:01.811 { 00:13:01.811 "method": "bdev_wait_for_examine" 00:13:01.811 } 00:13:01.811 ] 00:13:01.811 } 00:13:01.811 ] 00:13:01.811 } 00:13:01.811 [2024-10-08 18:23:50.527648] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:01.811 [2024-10-08 18:23:50.527776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82038 ] 00:13:01.811 [2024-10-08 18:23:50.656694] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:02.072 [2024-10-08 18:23:50.675481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.072 [2024-10-08 18:23:50.741711] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.072 Running I/O for 5 seconds... 
00:13:04.399 152320.00 IOPS, 595.00 MiB/s [2024-10-08T18:23:54.190Z] 168672.00 IOPS, 658.88 MiB/s [2024-10-08T18:23:55.132Z] 181397.33 IOPS, 708.58 MiB/s [2024-10-08T18:23:56.075Z] 187504.00 IOPS, 732.44 MiB/s 00:13:07.225 Latency(us) 00:13:07.225 [2024-10-08T18:23:56.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.225 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:07.225 null0 : 5.00 191098.99 746.48 0.00 0.00 332.48 111.85 2760.07 00:13:07.225 [2024-10-08T18:23:56.075Z] =================================================================================================================== 00:13:07.225 [2024-10-08T18:23:56.075Z] Total : 191098.99 746.48 0.00 0.00 332.48 111.85 2760.07 00:13:07.486 18:23:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:07.486 18:23:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:07.486 18:23:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:07.486 18:23:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:07.486 18:23:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:07.486 18:23:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.486 { 00:13:07.486 "subsystems": [ 00:13:07.486 { 00:13:07.486 "subsystem": "bdev", 00:13:07.486 "config": [ 00:13:07.486 { 00:13:07.486 "params": { 00:13:07.486 "io_mechanism": "io_uring", 00:13:07.486 "filename": "/dev/nullb0", 00:13:07.486 "name": "null0" 00:13:07.486 }, 00:13:07.486 "method": "bdev_xnvme_create" 00:13:07.486 }, 00:13:07.486 { 00:13:07.486 "method": "bdev_wait_for_examine" 00:13:07.486 } 00:13:07.486 ] 00:13:07.486 } 00:13:07.486 ] 00:13:07.486 } 00:13:07.486 [2024-10-08 18:23:56.139797] Starting SPDK v25.01-pre git sha1 92108e0a2 
/ DPDK 24.11.0-rc0 initialization... 00:13:07.486 [2024-10-08 18:23:56.139923] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82107 ] 00:13:07.486 [2024-10-08 18:23:56.270739] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:07.486 [2024-10-08 18:23:56.290785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.746 [2024-10-08 18:23:56.364514] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.746 Running I/O for 5 seconds... 00:13:10.075 174528.00 IOPS, 681.75 MiB/s [2024-10-08T18:23:59.866Z] 185568.00 IOPS, 724.88 MiB/s [2024-10-08T18:24:00.808Z] 202410.67 IOPS, 790.67 MiB/s [2024-10-08T18:24:01.746Z] 210560.00 IOPS, 822.50 MiB/s [2024-10-08T18:24:01.746Z] 215628.80 IOPS, 842.30 MiB/s 00:13:12.896 Latency(us) 00:13:12.896 [2024-10-08T18:24:01.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.896 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:12.896 null0 : 5.00 215562.19 842.04 0.00 0.00 294.59 264.66 2016.49 00:13:12.896 [2024-10-08T18:24:01.746Z] =================================================================================================================== 00:13:12.896 [2024-10-08T18:24:01.746Z] Total : 215562.19 842.04 0.00 0.00 294.59 264.66 2016.49 00:13:12.896 18:24:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:12.896 18:24:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:12.896 00:13:12.896 real 0m11.279s 00:13:12.896 user 0m8.699s 00:13:12.896 sys 0m2.345s 00:13:12.896 ************************************ 00:13:12.896 END TEST xnvme_bdevperf 00:13:12.896 
************************************ 00:13:12.896 18:24:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:12.896 18:24:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.156 00:13:13.156 real 0m29.711s 00:13:13.156 user 0m23.563s 00:13:13.156 sys 0m5.388s 00:13:13.156 18:24:01 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.156 18:24:01 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.156 ************************************ 00:13:13.156 END TEST nvme_xnvme 00:13:13.156 ************************************ 00:13:13.156 18:24:01 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:13.156 18:24:01 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:13.156 18:24:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.156 18:24:01 -- common/autotest_common.sh@10 -- # set +x 00:13:13.156 ************************************ 00:13:13.156 START TEST blockdev_xnvme 00:13:13.156 ************************************ 00:13:13.156 18:24:01 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:13.156 * Looking for test storage... 
00:13:13.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:13.156 18:24:01 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:13.156 18:24:01 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:13.156 18:24:01 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:13.156 18:24:01 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.156 18:24:01 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:13.156 18:24:01 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.156 18:24:01 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:13.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.156 --rc genhtml_branch_coverage=1 00:13:13.156 --rc genhtml_function_coverage=1 00:13:13.156 --rc genhtml_legend=1 00:13:13.156 --rc geninfo_all_blocks=1 00:13:13.156 --rc geninfo_unexecuted_blocks=1 00:13:13.156 00:13:13.156 ' 00:13:13.156 18:24:01 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:13.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.156 --rc genhtml_branch_coverage=1 00:13:13.156 --rc genhtml_function_coverage=1 00:13:13.156 --rc genhtml_legend=1 00:13:13.156 --rc geninfo_all_blocks=1 00:13:13.156 --rc geninfo_unexecuted_blocks=1 00:13:13.156 
00:13:13.156 ' 00:13:13.156 18:24:01 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:13.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.156 --rc genhtml_branch_coverage=1 00:13:13.156 --rc genhtml_function_coverage=1 00:13:13.156 --rc genhtml_legend=1 00:13:13.156 --rc geninfo_all_blocks=1 00:13:13.156 --rc geninfo_unexecuted_blocks=1 00:13:13.156 00:13:13.157 ' 00:13:13.157 18:24:01 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:13.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.157 --rc genhtml_branch_coverage=1 00:13:13.157 --rc genhtml_function_coverage=1 00:13:13.157 --rc genhtml_legend=1 00:13:13.157 --rc geninfo_all_blocks=1 00:13:13.157 --rc geninfo_unexecuted_blocks=1 00:13:13.157 00:13:13.157 ' 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 
00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=82249 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 82249 00:13:13.157 18:24:01 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 82249 ']' 00:13:13.157 18:24:01 blockdev_xnvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.157 18:24:01 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:13.157 18:24:01 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:13.157 18:24:01 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:13.157 18:24:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.157 18:24:01 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:13.418 [2024-10-08 18:24:02.065562] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:13.418 [2024-10-08 18:24:02.065961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82249 ] 00:13:13.418 [2024-10-08 18:24:02.202056] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:13.418 [2024-10-08 18:24:02.220556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.679 [2024-10-08 18:24:02.282248] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.252 18:24:02 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.252 18:24:02 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:14.252 18:24:02 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:14.252 18:24:02 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:14.252 18:24:02 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:14.252 18:24:02 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:14.252 18:24:02 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:14.513 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:14.513 Waiting for block devices as requested 00:13:14.773 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 
00:13:14.773 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.773 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:14.773 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.069 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:20.069 18:24:08 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:20.069 18:24:08 blockdev_xnvme -- 
common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:20.069 18:24:08 
blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:20.069 nvme0n1 00:13:20.069 nvme1n1 00:13:20.069 nvme2n1 00:13:20.069 nvme2n2 00:13:20.069 nvme2n3 00:13:20.069 nvme3n1 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config 
-n accel 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.069 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:20.069 18:24:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.070 18:24:08 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.070 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:20.070 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "98ee0d4b-5733-440e-ba38-22152daf8873"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "98ee0d4b-5733-440e-ba38-22152daf8873",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 
0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "f44906ee-9dc4-43ff-8fcc-3e5352e78eae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f44906ee-9dc4-43ff-8fcc-3e5352e78eae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "15ef23e9-05f2-4c23-8fce-07c4038d7433"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "15ef23e9-05f2-4c23-8fce-07c4038d7433",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "bd0adedc-ff1f-4d3f-8144-e722eb4fb522"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bd0adedc-ff1f-4d3f-8144-e722eb4fb522",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a09a311b-5d29-47eb-a196-d0b7eefdd9d3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a09a311b-5d29-47eb-a196-d0b7eefdd9d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2c6e855e-b313-4b1f-ba9c-4aee462c1824"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2c6e855e-b313-4b1f-ba9c-4aee462c1824",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:20.070 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:20.070 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:20.070 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:20.070 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:20.070 18:24:08 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 82249 00:13:20.070 18:24:08 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 82249 ']' 00:13:20.070 18:24:08 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 82249 00:13:20.070 18:24:08 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:20.070 18:24:08 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:20.070 18:24:08 blockdev_xnvme -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82249 00:13:20.070 18:24:08 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:20.070 18:24:08 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:20.070 killing process with pid 82249 00:13:20.070 18:24:08 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82249' 00:13:20.070 18:24:08 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 82249 00:13:20.070 18:24:08 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 82249 00:13:20.330 18:24:09 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:20.330 18:24:09 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:20.330 18:24:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:20.330 18:24:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.330 18:24:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.330 ************************************ 00:13:20.330 START TEST bdev_hello_world 00:13:20.330 ************************************ 00:13:20.330 18:24:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:20.588 [2024-10-08 18:24:09.225172] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:13:20.588 [2024-10-08 18:24:09.225286] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82596 ] 00:13:20.589 [2024-10-08 18:24:09.354713] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:20.589 [2024-10-08 18:24:09.375125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:20.589 [2024-10-08 18:24:09.409218] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.851 [2024-10-08 18:24:09.566699] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:20.851 [2024-10-08 18:24:09.566741] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:20.851 [2024-10-08 18:24:09.566766] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:20.851 [2024-10-08 18:24:09.568261] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:20.851 [2024-10-08 18:24:09.568621] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:20.851 [2024-10-08 18:24:09.568643] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:20.851 [2024-10-08 18:24:09.568884] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:13:20.851 00:13:20.851 [2024-10-08 18:24:09.568926] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:21.129 00:13:21.129 real 0m0.543s 00:13:21.129 user 0m0.293s 00:13:21.129 sys 0m0.138s 00:13:21.129 18:24:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.129 ************************************ 00:13:21.129 END TEST bdev_hello_world 00:13:21.129 ************************************ 00:13:21.129 18:24:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:21.129 18:24:09 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:21.129 18:24:09 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:21.129 18:24:09 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.129 18:24:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.129 ************************************ 00:13:21.129 START TEST bdev_bounds 00:13:21.129 ************************************ 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=82616 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:21.129 Process bdevio pid: 82616 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 82616' 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 82616 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 82616 ']' 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.129 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:21.129 18:24:09 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:21.129 [2024-10-08 18:24:09.830916] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:21.129 [2024-10-08 18:24:09.831053] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82616 ] 00:13:21.129 [2024-10-08 18:24:09.963378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:21.406 [2024-10-08 18:24:09.976173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:21.406 [2024-10-08 18:24:10.027477] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.406 [2024-10-08 18:24:10.027886] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.406 [2024-10-08 18:24:10.027970] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.979 18:24:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.979 18:24:10 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:21.979 18:24:10 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:21.979 I/O targets: 00:13:21.979 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:21.979 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:21.979 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:21.979 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:21.979 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:21.979 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:21.979 00:13:21.979 00:13:21.979 CUnit - A unit testing framework for C - Version 2.1-3 00:13:21.979 http://cunit.sourceforge.net/ 00:13:21.979 00:13:21.979 00:13:21.979 Suite: bdevio tests on: nvme3n1 00:13:21.979 Test: blockdev write read block ...passed 00:13:21.979 Test: blockdev write zeroes read block ...passed 00:13:21.979 Test: blockdev write zeroes read no split ...passed 00:13:21.979 Test: blockdev write zeroes read split ...passed 00:13:21.979 Test: blockdev write zeroes read split partial ...passed 00:13:21.979 Test: blockdev reset ...passed 00:13:21.979 Test: blockdev write read 8 blocks ...passed 00:13:22.241 Test: blockdev write read size > 128k ...passed 00:13:22.241 Test: blockdev write read invalid size ...passed 00:13:22.241 Test: blockdev write read offset + nbytes == size of blockdev ...passed 
00:13:22.241 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.241 Test: blockdev write read max offset ...passed 00:13:22.241 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.242 Test: blockdev writev readv 8 blocks ...passed 00:13:22.242 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.242 Test: blockdev writev readv block ...passed 00:13:22.242 Test: blockdev writev readv size > 128k ...passed 00:13:22.242 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.242 Test: blockdev comparev and writev ...passed 00:13:22.242 Test: blockdev nvme passthru rw ...passed 00:13:22.242 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.242 Test: blockdev nvme admin passthru ...passed 00:13:22.242 Test: blockdev copy ...passed 00:13:22.242 Suite: bdevio tests on: nvme2n3 00:13:22.242 Test: blockdev write read block ...passed 00:13:22.242 Test: blockdev write zeroes read block ...passed 00:13:22.242 Test: blockdev write zeroes read no split ...passed 00:13:22.242 Test: blockdev write zeroes read split ...passed 00:13:22.242 Test: blockdev write zeroes read split partial ...passed 00:13:22.242 Test: blockdev reset ...passed 00:13:22.242 Test: blockdev write read 8 blocks ...passed 00:13:22.242 Test: blockdev write read size > 128k ...passed 00:13:22.242 Test: blockdev write read invalid size ...passed 00:13:22.242 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.242 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.242 Test: blockdev write read max offset ...passed 00:13:22.242 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.242 Test: blockdev writev readv 8 blocks ...passed 00:13:22.242 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.242 Test: blockdev writev readv block ...passed 00:13:22.242 Test: blockdev writev readv size > 128k ...passed 00:13:22.242 Test: 
blockdev writev readv size > 128k in two iovs ...passed 00:13:22.242 Test: blockdev comparev and writev ...passed 00:13:22.242 Test: blockdev nvme passthru rw ...passed 00:13:22.242 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.242 Test: blockdev nvme admin passthru ...passed 00:13:22.242 Test: blockdev copy ...passed 00:13:22.242 Suite: bdevio tests on: nvme2n2 00:13:22.242 Test: blockdev write read block ...passed 00:13:22.242 Test: blockdev write zeroes read block ...passed 00:13:22.242 Test: blockdev write zeroes read no split ...passed 00:13:22.242 Test: blockdev write zeroes read split ...passed 00:13:22.242 Test: blockdev write zeroes read split partial ...passed 00:13:22.242 Test: blockdev reset ...passed 00:13:22.242 Test: blockdev write read 8 blocks ...passed 00:13:22.242 Test: blockdev write read size > 128k ...passed 00:13:22.242 Test: blockdev write read invalid size ...passed 00:13:22.242 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.242 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.242 Test: blockdev write read max offset ...passed 00:13:22.242 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.242 Test: blockdev writev readv 8 blocks ...passed 00:13:22.242 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.242 Test: blockdev writev readv block ...passed 00:13:22.242 Test: blockdev writev readv size > 128k ...passed 00:13:22.242 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.242 Test: blockdev comparev and writev ...passed 00:13:22.242 Test: blockdev nvme passthru rw ...passed 00:13:22.242 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.242 Test: blockdev nvme admin passthru ...passed 00:13:22.242 Test: blockdev copy ...passed 00:13:22.242 Suite: bdevio tests on: nvme2n1 00:13:22.242 Test: blockdev write read block ...passed 00:13:22.242 Test: blockdev write zeroes read block ...passed 
00:13:22.242 Test: blockdev write zeroes read no split ...passed 00:13:22.242 Test: blockdev write zeroes read split ...passed 00:13:22.242 Test: blockdev write zeroes read split partial ...passed 00:13:22.242 Test: blockdev reset ...passed 00:13:22.242 Test: blockdev write read 8 blocks ...passed 00:13:22.242 Test: blockdev write read size > 128k ...passed 00:13:22.242 Test: blockdev write read invalid size ...passed 00:13:22.242 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.242 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.242 Test: blockdev write read max offset ...passed 00:13:22.242 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.242 Test: blockdev writev readv 8 blocks ...passed 00:13:22.242 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.242 Test: blockdev writev readv block ...passed 00:13:22.242 Test: blockdev writev readv size > 128k ...passed 00:13:22.242 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.242 Test: blockdev comparev and writev ...passed 00:13:22.242 Test: blockdev nvme passthru rw ...passed 00:13:22.242 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.242 Test: blockdev nvme admin passthru ...passed 00:13:22.242 Test: blockdev copy ...passed 00:13:22.242 Suite: bdevio tests on: nvme1n1 00:13:22.242 Test: blockdev write read block ...passed 00:13:22.242 Test: blockdev write zeroes read block ...passed 00:13:22.242 Test: blockdev write zeroes read no split ...passed 00:13:22.242 Test: blockdev write zeroes read split ...passed 00:13:22.242 Test: blockdev write zeroes read split partial ...passed 00:13:22.242 Test: blockdev reset ...passed 00:13:22.242 Test: blockdev write read 8 blocks ...passed 00:13:22.242 Test: blockdev write read size > 128k ...passed 00:13:22.242 Test: blockdev write read invalid size ...passed 00:13:22.242 Test: blockdev write read offset + nbytes == size of blockdev 
...passed 00:13:22.242 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.242 Test: blockdev write read max offset ...passed 00:13:22.242 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.242 Test: blockdev writev readv 8 blocks ...passed 00:13:22.242 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.242 Test: blockdev writev readv block ...passed 00:13:22.242 Test: blockdev writev readv size > 128k ...passed 00:13:22.242 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.242 Test: blockdev comparev and writev ...passed 00:13:22.242 Test: blockdev nvme passthru rw ...passed 00:13:22.242 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.242 Test: blockdev nvme admin passthru ...passed 00:13:22.242 Test: blockdev copy ...passed 00:13:22.242 Suite: bdevio tests on: nvme0n1 00:13:22.242 Test: blockdev write read block ...passed 00:13:22.242 Test: blockdev write zeroes read block ...passed 00:13:22.242 Test: blockdev write zeroes read no split ...passed 00:13:22.242 Test: blockdev write zeroes read split ...passed 00:13:22.242 Test: blockdev write zeroes read split partial ...passed 00:13:22.242 Test: blockdev reset ...passed 00:13:22.242 Test: blockdev write read 8 blocks ...passed 00:13:22.242 Test: blockdev write read size > 128k ...passed 00:13:22.242 Test: blockdev write read invalid size ...passed 00:13:22.242 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.242 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.242 Test: blockdev write read max offset ...passed 00:13:22.242 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.242 Test: blockdev writev readv 8 blocks ...passed 00:13:22.242 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.242 Test: blockdev writev readv block ...passed 00:13:22.242 Test: blockdev writev readv size > 128k ...passed 00:13:22.242 
Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.242 Test: blockdev comparev and writev ...passed 00:13:22.242 Test: blockdev nvme passthru rw ...passed 00:13:22.242 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.242 Test: blockdev nvme admin passthru ...passed 00:13:22.242 Test: blockdev copy ...passed 00:13:22.242 00:13:22.242 Run Summary: Type Total Ran Passed Failed Inactive 00:13:22.242 suites 6 6 n/a 0 0 00:13:22.242 tests 138 138 138 0 0 00:13:22.242 asserts 780 780 780 0 n/a 00:13:22.242 00:13:22.242 Elapsed time = 0.588 seconds 00:13:22.242 0 00:13:22.242 18:24:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 82616 00:13:22.242 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 82616 ']' 00:13:22.242 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 82616 00:13:22.242 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:22.242 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:22.242 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82616 00:13:22.504 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:22.504 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:22.504 killing process with pid 82616 00:13:22.504 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82616' 00:13:22.504 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 82616 00:13:22.504 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 82616 00:13:22.504 18:24:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:22.504 00:13:22.504 real 0m1.559s 00:13:22.504 user 0m3.754s 00:13:22.504 sys 
0m0.337s 00:13:22.504 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:22.504 ************************************ 00:13:22.504 END TEST bdev_bounds 00:13:22.504 ************************************ 00:13:22.504 18:24:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:22.765 18:24:11 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:22.765 18:24:11 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:22.765 18:24:11 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.765 18:24:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.765 ************************************ 00:13:22.765 START TEST bdev_nbd 00:13:22.765 ************************************ 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd 
]] 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=82670 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 82670 /var/tmp/spdk-nbd.sock 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 82670 ']' 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:22.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:22.765 18:24:11 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:22.765 [2024-10-08 18:24:11.481233] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:22.765 [2024-10-08 18:24:11.481397] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.026 [2024-10-08 18:24:11.619016] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:23.026 [2024-10-08 18:24:11.637603] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.026 [2024-10-08 18:24:11.672447] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.599 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 
00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.860 1+0 records in 00:13:23.860 1+0 records out 00:13:23.860 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000853124 s, 4.8 MB/s 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:23.860 18:24:12 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.860 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.120 1+0 records in 00:13:24.120 1+0 records out 00:13:24.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136863 s, 3.0 MB/s 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.120 18:24:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.382 1+0 records in 00:13:24.382 1+0 records out 00:13:24.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101255 
s, 4.0 MB/s 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.382 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.643 1+0 records in 00:13:24.643 1+0 records out 00:13:24.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000944112 s, 4.3 MB/s 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.643 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.644 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.644 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:24.904 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 
/proc/partitions 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.905 1+0 records in 00:13:24.905 1+0 records out 00:13:24.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121467 s, 3.4 MB/s 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.905 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:25.165 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:25.165 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:25.165 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:25.165 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # 
local i 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.166 1+0 records in 00:13:25.166 1+0 records out 00:13:25.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105317 s, 3.9 MB/s 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:25.166 18:24:13 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd0", 00:13:25.427 "bdev_name": 
"nvme0n1" 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd1", 00:13:25.427 "bdev_name": "nvme1n1" 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd2", 00:13:25.427 "bdev_name": "nvme2n1" 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd3", 00:13:25.427 "bdev_name": "nvme2n2" 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd4", 00:13:25.427 "bdev_name": "nvme2n3" 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd5", 00:13:25.427 "bdev_name": "nvme3n1" 00:13:25.427 } 00:13:25.427 ]' 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd0", 00:13:25.427 "bdev_name": "nvme0n1" 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd1", 00:13:25.427 "bdev_name": "nvme1n1" 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd2", 00:13:25.427 "bdev_name": "nvme2n1" 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd3", 00:13:25.427 "bdev_name": "nvme2n2" 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd4", 00:13:25.427 "bdev_name": "nvme2n3" 00:13:25.427 }, 00:13:25.427 { 00:13:25.427 "nbd_device": "/dev/nbd5", 00:13:25.427 "bdev_name": "nvme3n1" 00:13:25.427 } 00:13:25.427 ]' 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' 
'/dev/nbd4' '/dev/nbd5') 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.427 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:25.689 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.689 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.689 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.689 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.689 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.689 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.689 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.689 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.689 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.689 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:25.950 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:25.950 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:25.950 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:25.950 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.950 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.950 18:24:14 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:25.950 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.950 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.950 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.950 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:26.211 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:26.211 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:26.211 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:26.211 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.211 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.211 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:26.211 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.211 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.211 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.211 18:24:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:26.211 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:26.211 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:26.212 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:26.212 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.212 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.212 
18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:26.212 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.212 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.212 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.212 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:26.472 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:26.472 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:26.472 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:26.472 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.472 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.472 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:26.472 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.472 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.472 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.472 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- 
# (( i <= 20 )) 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.733 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:26.994 
18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:26.994 18:24:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:27.253 /dev/nbd0 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.253 1+0 records in 00:13:27.253 1+0 records out 00:13:27.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000757838 s, 5.4 MB/s 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.253 18:24:16 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:27.511 /dev/nbd1 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.511 1+0 records in 00:13:27.511 1+0 records out 00:13:27.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00202228 s, 2.0 MB/s 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- 
# return 0 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.511 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:27.770 /dev/nbd10 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.770 1+0 records in 00:13:27.770 1+0 records out 00:13:27.770 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000950952 s, 4.3 MB/s 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- 
# rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.770 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:28.029 /dev/nbd11 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.029 1+0 records in 00:13:28.029 1+0 records out 00:13:28.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596924 s, 6.9 MB/s 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:28.029 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:28.287 /dev/nbd12 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.287 1+0 records in 00:13:28.287 1+0 records out 00:13:28.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102555 s, 4.0 MB/s 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:28.287 18:24:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:28.546 /dev/nbd13 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.546 1+0 records in 00:13:28.546 1+0 records out 00:13:28.546 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062199 s, 6.6 MB/s 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.546 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd0", 00:13:28.807 "bdev_name": "nvme0n1" 00:13:28.807 }, 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd1", 00:13:28.807 "bdev_name": "nvme1n1" 00:13:28.807 }, 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd10", 00:13:28.807 "bdev_name": "nvme2n1" 
00:13:28.807 }, 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd11", 00:13:28.807 "bdev_name": "nvme2n2" 00:13:28.807 }, 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd12", 00:13:28.807 "bdev_name": "nvme2n3" 00:13:28.807 }, 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd13", 00:13:28.807 "bdev_name": "nvme3n1" 00:13:28.807 } 00:13:28.807 ]' 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd0", 00:13:28.807 "bdev_name": "nvme0n1" 00:13:28.807 }, 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd1", 00:13:28.807 "bdev_name": "nvme1n1" 00:13:28.807 }, 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd10", 00:13:28.807 "bdev_name": "nvme2n1" 00:13:28.807 }, 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd11", 00:13:28.807 "bdev_name": "nvme2n2" 00:13:28.807 }, 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd12", 00:13:28.807 "bdev_name": "nvme2n3" 00:13:28.807 }, 00:13:28.807 { 00:13:28.807 "nbd_device": "/dev/nbd13", 00:13:28.807 "bdev_name": "nvme3n1" 00:13:28.807 } 00:13:28.807 ]' 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:28.807 /dev/nbd1 00:13:28.807 /dev/nbd10 00:13:28.807 /dev/nbd11 00:13:28.807 /dev/nbd12 00:13:28.807 /dev/nbd13' 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:28.807 /dev/nbd1 00:13:28.807 /dev/nbd10 00:13:28.807 /dev/nbd11 00:13:28.807 /dev/nbd12 00:13:28.807 /dev/nbd13' 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:28.807 
18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:28.807 256+0 records in 00:13:28.807 256+0 records out 00:13:28.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0090578 s, 116 MB/s 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.807 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:29.068 256+0 records in 00:13:29.068 256+0 records out 00:13:29.068 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.221933 s, 4.7 MB/s 00:13:29.068 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:29.068 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:29.329 256+0 records in 00:13:29.329 256+0 records out 00:13:29.329 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.306126 s, 3.4 MB/s 00:13:29.329 18:24:17 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:29.329 18:24:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:29.590 256+0 records in 00:13:29.590 256+0 records out 00:13:29.590 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.250442 s, 4.2 MB/s 00:13:29.590 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:29.590 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:29.850 256+0 records in 00:13:29.850 256+0 records out 00:13:29.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242444 s, 4.3 MB/s 00:13:29.851 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:29.851 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:30.112 256+0 records in 00:13:30.112 256+0 records out 00:13:30.112 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.250954 s, 4.2 MB/s 00:13:30.112 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:30.112 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:30.372 256+0 records in 00:13:30.372 256+0 records out 00:13:30.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210058 s, 5.0 MB/s 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:30.373 18:24:18 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:30.373 18:24:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.373 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:30.634 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:30.634 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:30.634 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:30.634 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.634 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.635 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:30.635 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.635 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.635 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.635 18:24:19 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.896 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.896 
18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:31.157 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:31.157 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:31.157 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:31.157 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.157 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.157 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:31.157 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.157 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.157 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.157 18:24:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:31.418 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:31.418 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:31.418 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:31.418 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.418 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.418 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:31.418 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.418 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.418 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # 
for i in "${nbd_list[@]}" 00:13:31.418 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.679 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:31.939 18:24:20 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:31.939 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:32.200 malloc_lvol_verify 00:13:32.200 18:24:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:32.461 2170f488-752e-46ae-9a83-671cf0ca0dcf 00:13:32.461 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:32.721 9c98d45f-2442-4f24-b6b8-72ba249be415 00:13:32.721 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:32.721 /dev/nbd0 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:32.982 18:24:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:32.982 mke2fs 1.47.0 (5-Feb-2023) 00:13:32.982 Discarding device blocks: 0/4096 done 00:13:32.982 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:32.982 00:13:32.982 Allocating group tables: 0/1 done 00:13:32.982 Writing inode tables: 0/1 done 00:13:32.982 Creating journal (1024 blocks): done 00:13:32.982 Writing superblocks and filesystem accounting information: 0/1 done 00:13:32.982 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 82670 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 82670 ']' 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 82670 00:13:32.982 18:24:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:33.242 18:24:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.242 18:24:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82670 00:13:33.242 18:24:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:33.242 killing process with pid 82670 00:13:33.242 18:24:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:33.242 18:24:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82670' 00:13:33.242 18:24:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 82670 00:13:33.242 18:24:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 82670 00:13:33.503 18:24:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:33.503 00:13:33.503 real 0m10.773s 00:13:33.503 user 0m14.400s 00:13:33.503 sys 0m4.020s 00:13:33.503 ************************************ 00:13:33.503 END TEST bdev_nbd 00:13:33.503 ************************************ 00:13:33.503 18:24:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.503 18:24:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:33.503 18:24:22 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:33.503 18:24:22 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:33.503 18:24:22 
blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:33.503 18:24:22 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:33.503 18:24:22 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:33.503 18:24:22 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.503 18:24:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.504 ************************************ 00:13:33.504 START TEST bdev_fio 00:13:33.504 ************************************ 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:33.504 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:33.504 
18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in 
"${bdevs_name[@]}" 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@10 -- # set +x 00:13:33.504 ************************************ 00:13:33.504 START TEST bdev_fio_rw_verify 00:13:33.504 ************************************ 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:33.504 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:33.765 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:33.765 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:33.765 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:33.765 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:33.765 18:24:22 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:33.765 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.765 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.765 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.765 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.765 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:33.765 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:13:33.765 fio-3.35 00:13:33.765 Starting 6 threads 00:13:46.058 00:13:46.058 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83071: Tue Oct 8 18:24:33 2024 00:13:46.058 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(450MiB/10002msec) 00:13:46.058 slat (usec): min=2, max=2594, avg= 7.64, stdev=17.31 00:13:46.058 clat (usec): min=95, max=8175, avg=1785.38, stdev=853.13 00:13:46.058 lat (usec): min=101, max=8178, avg=1793.02, stdev=853.67 00:13:46.058 clat percentiles (usec): 00:13:46.058 | 50.000th=[ 1680], 99.000th=[ 4424], 99.900th=[ 5800], 99.990th=[ 7767], 00:13:46.058 | 99.999th=[ 8160] 00:13:46.058 write: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(468MiB/10002msec); 0 zone resets 00:13:46.058 slat (usec): min=4, max=3905, avg=42.73, stdev=149.43 00:13:46.058 clat (usec): min=136, max=10654, avg=1946.74, stdev=882.71 00:13:46.058 lat (usec): min=152, max=10722, avg=1989.47, stdev=894.42 00:13:46.058 clat percentiles (usec): 00:13:46.058 | 50.000th=[ 1811], 99.000th=[ 4686], 99.900th=[ 5997], 99.990th=[ 8717], 00:13:46.058 | 99.999th=[10552] 00:13:46.058 bw ( KiB/s): min=42159, max=49488, per=99.70%, avg=47764.16, stdev=443.23, samples=114 00:13:46.058 iops : min=10538, max=12372, avg=11940.16, stdev=110.78, samples=114 00:13:46.058 lat (usec) : 100=0.01%, 250=0.24%, 500=2.16%, 750=4.28%, 1000=7.06% 00:13:46.058 lat (msec) : 2=49.26%, 4=34.69%, 10=2.32%, 20=0.01% 00:13:46.058 cpu : usr=48.11%, sys=31.33%, ctx=4632, majf=0, minf=12839 00:13:46.058 IO depths : 1=11.5%, 2=23.9%, 4=51.1%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:46.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.058 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:46.058 issued rwts: total=115250,119798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:46.058 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:46.058 00:13:46.058 Run status group 0 (all jobs): 00:13:46.058 READ: bw=45.0MiB/s (47.2MB/s), 
45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=450MiB (472MB), run=10002-10002msec 00:13:46.058 WRITE: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=468MiB (491MB), run=10002-10002msec 00:13:46.058 ----------------------------------------------------- 00:13:46.058 Suppressions used: 00:13:46.058 count bytes template 00:13:46.058 6 48 /usr/src/fio/parse.c 00:13:46.058 4472 429312 /usr/src/fio/iolog.c 00:13:46.058 1 8 libtcmalloc_minimal.so 00:13:46.058 1 904 libcrypto.so 00:13:46.058 ----------------------------------------------------- 00:13:46.058 00:13:46.058 00:13:46.058 real 0m11.299s 00:13:46.058 user 0m29.670s 00:13:46.058 sys 0m19.195s 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:46.058 ************************************ 00:13:46.058 END TEST bdev_fio_rw_verify 00:13:46.058 ************************************ 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:46.058 18:24:33 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "98ee0d4b-5733-440e-ba38-22152daf8873"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "98ee0d4b-5733-440e-ba38-22152daf8873",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' 
' "name": "nvme1n1",' ' "aliases": [' ' "f44906ee-9dc4-43ff-8fcc-3e5352e78eae"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f44906ee-9dc4-43ff-8fcc-3e5352e78eae",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "15ef23e9-05f2-4c23-8fce-07c4038d7433"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "15ef23e9-05f2-4c23-8fce-07c4038d7433",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "bd0adedc-ff1f-4d3f-8144-e722eb4fb522"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": 
"bd0adedc-ff1f-4d3f-8144-e722eb4fb522",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a09a311b-5d29-47eb-a196-d0b7eefdd9d3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a09a311b-5d29-47eb-a196-d0b7eefdd9d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "2c6e855e-b313-4b1f-ba9c-4aee462c1824"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "2c6e855e-b313-4b1f-ba9c-4aee462c1824",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:46.058 /home/vagrant/spdk_repo/spdk 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:46.058 00:13:46.058 real 0m11.489s 00:13:46.058 user 0m29.742s 00:13:46.058 sys 0m19.289s 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:46.058 ************************************ 00:13:46.058 END TEST bdev_fio 00:13:46.058 ************************************ 00:13:46.058 18:24:33 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:46.058 18:24:33 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:46.058 18:24:33 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:46.058 18:24:33 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:46.058 18:24:33 blockdev_xnvme -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:13:46.058 18:24:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:46.058 ************************************ 00:13:46.058 START TEST bdev_verify 00:13:46.058 ************************************ 00:13:46.058 18:24:33 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:46.058 [2024-10-08 18:24:33.882638] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:46.059 [2024-10-08 18:24:33.882809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83243 ] 00:13:46.059 [2024-10-08 18:24:34.019166] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:46.059 [2024-10-08 18:24:34.037349] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:46.059 [2024-10-08 18:24:34.110200] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.059 [2024-10-08 18:24:34.110260] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:46.059 Running I/O for 5 seconds... 
00:13:47.941 24384.00 IOPS, 95.25 MiB/s [2024-10-08T18:24:37.734Z] 23808.00 IOPS, 93.00 MiB/s [2024-10-08T18:24:38.677Z] 23189.33 IOPS, 90.58 MiB/s [2024-10-08T18:24:39.622Z] 23344.00 IOPS, 91.19 MiB/s [2024-10-08T18:24:39.622Z] 23449.60 IOPS, 91.60 MiB/s 00:13:50.772 Latency(us) 00:13:50.772 [2024-10-08T18:24:39.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.772 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0x0 length 0xa0000 00:13:50.772 nvme0n1 : 5.07 1717.40 6.71 0.00 0.00 74396.04 8015.56 75820.11 00:13:50.772 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0xa0000 length 0xa0000 00:13:50.772 nvme0n1 : 5.05 1950.17 7.62 0.00 0.00 65510.44 9931.22 63317.86 00:13:50.772 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0x0 length 0xbd0bd 00:13:50.772 nvme1n1 : 5.06 2263.48 8.84 0.00 0.00 56014.56 5797.42 70173.93 00:13:50.772 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:50.772 nvme1n1 : 5.06 2514.80 9.82 0.00 0.00 50666.61 7108.14 53235.40 00:13:50.772 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0x0 length 0x80000 00:13:50.772 nvme2n1 : 5.07 1715.23 6.70 0.00 0.00 74056.12 9931.22 93565.24 00:13:50.772 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0x80000 length 0x80000 00:13:50.772 nvme2n1 : 5.06 2000.09 7.81 0.00 0.00 63519.82 5343.70 60091.47 00:13:50.772 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0x0 length 0x80000 00:13:50.772 nvme2n2 : 5.08 1713.48 6.69 0.00 0.00 73947.93 10334.52 93565.24 00:13:50.772 
Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0x80000 length 0x80000 00:13:50.772 nvme2n2 : 5.06 1948.90 7.61 0.00 0.00 65078.82 8519.68 61301.37 00:13:50.772 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0x0 length 0x80000 00:13:50.772 nvme2n3 : 5.09 1711.61 6.69 0.00 0.00 73852.87 8922.98 67754.14 00:13:50.772 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0x80000 length 0x80000 00:13:50.772 nvme2n3 : 5.07 1944.55 7.60 0.00 0.00 65117.46 9326.28 60494.77 00:13:50.772 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:50.772 Verification LBA range: start 0x0 length 0x20000 00:13:50.772 nvme3n1 : 5.09 1710.18 6.68 0.00 0.00 73818.85 6604.01 70173.93 00:13:50.773 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:50.773 Verification LBA range: start 0x20000 length 0x20000 00:13:50.773 nvme3n1 : 5.08 1966.36 7.68 0.00 0.00 64287.66 4259.84 63317.86 00:13:50.773 [2024-10-08T18:24:39.623Z] =================================================================================================================== 00:13:50.773 [2024-10-08T18:24:39.623Z] Total : 23156.25 90.45 0.00 0.00 65781.28 4259.84 93565.24 00:13:51.034 00:13:51.034 real 0m5.917s 00:13:51.034 user 0m9.323s 00:13:51.034 sys 0m1.564s 00:13:51.034 18:24:39 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.034 ************************************ 00:13:51.035 END TEST bdev_verify 00:13:51.035 18:24:39 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:51.035 ************************************ 00:13:51.035 18:24:39 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:51.035 18:24:39 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:51.035 18:24:39 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.035 18:24:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.035 ************************************ 00:13:51.035 START TEST bdev_verify_big_io 00:13:51.035 ************************************ 00:13:51.035 18:24:39 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:51.035 [2024-10-08 18:24:39.867850] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:51.035 [2024-10-08 18:24:39.867978] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83333 ] 00:13:51.296 [2024-10-08 18:24:40.001854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:51.296 [2024-10-08 18:24:40.022232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.296 [2024-10-08 18:24:40.077401] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.296 [2024-10-08 18:24:40.077496] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.557 Running I/O for 5 seconds... 
00:13:57.677 2240.00 IOPS, 140.00 MiB/s [2024-10-08T18:24:46.789Z] 2672.00 IOPS, 167.00 MiB/s [2024-10-08T18:24:46.789Z] 2951.00 IOPS, 184.44 MiB/s 00:13:57.939 Latency(us) 00:13:57.939 [2024-10-08T18:24:46.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:57.939 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0x0 length 0xa000 00:13:57.939 nvme0n1 : 5.89 76.06 4.75 0.00 0.00 1565500.20 229073.53 2000360.37 00:13:57.939 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0xa000 length 0xa000 00:13:57.939 nvme0n1 : 5.88 108.80 6.80 0.00 0.00 1127812.80 324251.96 955010.76 00:13:57.939 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0x0 length 0xbd0b 00:13:57.939 nvme1n1 : 6.00 101.38 6.34 0.00 0.00 1140205.68 67754.14 2051982.57 00:13:57.939 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:57.939 nvme1n1 : 5.89 162.89 10.18 0.00 0.00 730685.97 38918.30 858219.13 00:13:57.939 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0x0 length 0x8000 00:13:57.939 nvme2n1 : 6.02 116.93 7.31 0.00 0.00 944288.40 38716.65 1038896.84 00:13:57.939 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0x8000 length 0x8000 00:13:57.939 nvme2n1 : 5.91 135.46 8.47 0.00 0.00 877960.02 21576.47 1187310.67 00:13:57.939 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0x0 length 0x8000 00:13:57.939 nvme2n2 : 6.04 106.29 6.64 0.00 0.00 998343.35 31053.98 1174405.12 00:13:57.939 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 
Verification LBA range: start 0x8000 length 0x8000 00:13:57.939 nvme2n2 : 5.90 130.15 8.13 0.00 0.00 889676.87 15728.64 1393799.48 00:13:57.939 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0x0 length 0x8000 00:13:57.939 nvme2n3 : 6.16 137.71 8.61 0.00 0.00 739345.72 29642.44 1122782.92 00:13:57.939 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0x8000 length 0x8000 00:13:57.939 nvme2n3 : 5.90 119.25 7.45 0.00 0.00 947659.58 15728.64 1922927.06 00:13:57.939 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0x0 length 0x2000 00:13:57.939 nvme3n1 : 6.38 200.95 12.56 0.00 0.00 487306.64 2268.55 3381254.30 00:13:57.939 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:57.939 Verification LBA range: start 0x2000 length 0x2000 00:13:57.939 nvme3n1 : 5.89 184.65 11.54 0.00 0.00 595477.29 12098.95 864671.90 00:13:57.939 [2024-10-08T18:24:46.789Z] =================================================================================================================== 00:13:57.939 [2024-10-08T18:24:46.789Z] Total : 1580.51 98.78 0.00 0.00 850775.77 2268.55 3381254.30 00:13:58.201 00:13:58.201 real 0m7.201s 00:13:58.201 user 0m13.193s 00:13:58.201 sys 0m0.479s 00:13:58.201 18:24:46 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.201 18:24:46 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.201 ************************************ 00:13:58.201 END TEST bdev_verify_big_io 00:13:58.201 ************************************ 00:13:58.462 18:24:47 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:13:58.462 18:24:47 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:13:58.462 18:24:47 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.462 18:24:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:58.462 ************************************ 00:13:58.462 START TEST bdev_write_zeroes 00:13:58.462 ************************************ 00:13:58.462 18:24:47 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:58.462 [2024-10-08 18:24:47.136705] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:13:58.462 [2024-10-08 18:24:47.136878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83440 ] 00:13:58.462 [2024-10-08 18:24:47.272026] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:58.462 [2024-10-08 18:24:47.291536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.723 [2024-10-08 18:24:47.341212] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.723 Running I/O for 1 seconds... 
00:14:00.109 101088.00 IOPS, 394.88 MiB/s 00:14:00.109 Latency(us) 00:14:00.109 [2024-10-08T18:24:48.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.109 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.109 nvme0n1 : 1.01 16678.61 65.15 0.00 0.00 7664.75 5999.06 21072.34 00:14:00.109 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.109 nvme1n1 : 1.02 17161.76 67.04 0.00 0.00 7443.63 5999.06 16434.41 00:14:00.109 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.109 nvme2n1 : 1.03 16589.83 64.80 0.00 0.00 7643.43 4209.43 22786.36 00:14:00.109 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.109 nvme2n2 : 1.03 16570.39 64.73 0.00 0.00 7646.98 4209.43 22786.36 00:14:00.109 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.109 nvme2n3 : 1.03 16549.88 64.65 0.00 0.00 7649.61 4234.63 22786.36 00:14:00.109 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:00.109 nvme3n1 : 1.02 16506.75 64.48 0.00 0.00 7663.40 4335.46 22786.36 00:14:00.109 [2024-10-08T18:24:48.959Z] =================================================================================================================== 00:14:00.109 [2024-10-08T18:24:48.959Z] Total : 100057.22 390.85 0.00 0.00 7617.70 4209.43 22786.36 00:14:00.109 00:14:00.109 real 0m1.758s 00:14:00.109 user 0m1.085s 00:14:00.109 sys 0m0.489s 00:14:00.109 18:24:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.109 18:24:48 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:00.109 ************************************ 00:14:00.109 END TEST bdev_write_zeroes 00:14:00.109 ************************************ 00:14:00.109 18:24:48 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.109 18:24:48 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:00.109 18:24:48 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.109 18:24:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.109 ************************************ 00:14:00.109 START TEST bdev_json_nonenclosed 00:14:00.109 ************************************ 00:14:00.109 18:24:48 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.369 [2024-10-08 18:24:48.974906] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:00.369 [2024-10-08 18:24:48.975065] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83476 ] 00:14:00.369 [2024-10-08 18:24:49.111700] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:00.369 [2024-10-08 18:24:49.131531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.369 [2024-10-08 18:24:49.181158] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.370 [2024-10-08 18:24:49.181267] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:14:00.370 [2024-10-08 18:24:49.181287] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:00.370 [2024-10-08 18:24:49.181298] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:00.631 00:14:00.631 real 0m0.393s 00:14:00.631 user 0m0.157s 00:14:00.631 sys 0m0.130s 00:14:00.631 18:24:49 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.631 ************************************ 00:14:00.631 18:24:49 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:00.631 END TEST bdev_json_nonenclosed 00:14:00.631 ************************************ 00:14:00.631 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.631 18:24:49 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:00.631 18:24:49 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.631 18:24:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.631 ************************************ 00:14:00.631 START TEST bdev_json_nonarray 00:14:00.631 ************************************ 00:14:00.631 18:24:49 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.631 [2024-10-08 18:24:49.432096] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:14:00.631 [2024-10-08 18:24:49.432244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83502 ] 00:14:00.893 [2024-10-08 18:24:49.568941] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:00.893 [2024-10-08 18:24:49.590036] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.893 [2024-10-08 18:24:49.641238] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.893 [2024-10-08 18:24:49.641352] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:00.893 [2024-10-08 18:24:49.641375] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:00.893 [2024-10-08 18:24:49.641387] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:01.154 00:14:01.154 real 0m0.393s 00:14:01.154 user 0m0.172s 00:14:01.154 sys 0m0.114s 00:14:01.154 18:24:49 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:01.154 ************************************ 00:14:01.154 END TEST bdev_json_nonarray 00:14:01.154 ************************************ 00:14:01.154 18:24:49 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:01.154 18:24:49 
blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:01.154 18:24:49 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:01.727 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:08.310 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:08.310 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:10.227 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:10.227 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:10.227 00:14:10.227 real 0m57.219s 00:14:10.227 user 1m20.406s 00:14:10.227 sys 0m40.521s 00:14:10.227 18:24:59 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.227 ************************************ 00:14:10.227 END TEST blockdev_xnvme 00:14:10.227 ************************************ 00:14:10.227 18:24:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:10.488 18:24:59 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:10.488 18:24:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:10.488 18:24:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.488 18:24:59 -- common/autotest_common.sh@10 -- # set +x 00:14:10.488 ************************************ 00:14:10.488 START TEST ublk 00:14:10.488 ************************************ 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:10.488 * Looking for test storage... 00:14:10.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:10.488 18:24:59 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.488 18:24:59 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.488 18:24:59 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.488 18:24:59 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.488 18:24:59 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.488 18:24:59 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.488 18:24:59 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.488 18:24:59 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.488 18:24:59 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.488 18:24:59 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.488 18:24:59 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.488 18:24:59 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:10.488 18:24:59 ublk -- scripts/common.sh@345 -- # : 1 00:14:10.488 18:24:59 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.488 18:24:59 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.488 18:24:59 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:10.488 18:24:59 ublk -- scripts/common.sh@353 -- # local d=1 00:14:10.488 18:24:59 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.488 18:24:59 ublk -- scripts/common.sh@355 -- # echo 1 00:14:10.488 18:24:59 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.488 18:24:59 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:10.488 18:24:59 ublk -- scripts/common.sh@353 -- # local d=2 00:14:10.488 18:24:59 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.488 18:24:59 ublk -- scripts/common.sh@355 -- # echo 2 00:14:10.488 18:24:59 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.488 18:24:59 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.488 18:24:59 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.488 18:24:59 ublk -- scripts/common.sh@368 -- # return 0 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:10.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.488 --rc genhtml_branch_coverage=1 00:14:10.488 --rc genhtml_function_coverage=1 00:14:10.488 --rc genhtml_legend=1 00:14:10.488 --rc geninfo_all_blocks=1 00:14:10.488 --rc geninfo_unexecuted_blocks=1 00:14:10.488 00:14:10.488 ' 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:10.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.488 --rc genhtml_branch_coverage=1 00:14:10.488 --rc genhtml_function_coverage=1 00:14:10.488 --rc genhtml_legend=1 00:14:10.488 --rc geninfo_all_blocks=1 00:14:10.488 --rc geninfo_unexecuted_blocks=1 00:14:10.488 00:14:10.488 ' 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:10.488 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:10.488 --rc genhtml_branch_coverage=1 00:14:10.488 --rc genhtml_function_coverage=1 00:14:10.488 --rc genhtml_legend=1 00:14:10.488 --rc geninfo_all_blocks=1 00:14:10.488 --rc geninfo_unexecuted_blocks=1 00:14:10.488 00:14:10.488 ' 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:10.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.488 --rc genhtml_branch_coverage=1 00:14:10.488 --rc genhtml_function_coverage=1 00:14:10.488 --rc genhtml_legend=1 00:14:10.488 --rc geninfo_all_blocks=1 00:14:10.488 --rc geninfo_unexecuted_blocks=1 00:14:10.488 00:14:10.488 ' 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:10.488 18:24:59 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:10.488 18:24:59 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:10.488 18:24:59 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:10.488 18:24:59 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:10.488 18:24:59 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:10.488 18:24:59 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:10.488 18:24:59 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:10.488 18:24:59 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@133 -- # 
modprobe ublk_drv 00:14:10.488 18:24:59 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.488 18:24:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:10.488 ************************************ 00:14:10.488 START TEST test_save_ublk_config 00:14:10.489 ************************************ 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=83801 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 83801 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83801 ']' 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:10.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:10.489 18:24:59 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:10.489 18:24:59 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:10.780 [2024-10-08 18:24:59.392010] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:10.780 [2024-10-08 18:24:59.392173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83801 ] 00:14:10.780 [2024-10-08 18:24:59.529286] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:10.780 [2024-10-08 18:24:59.546945] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.064 [2024-10-08 18:24:59.629744] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.636 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.636 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:11.636 18:25:00 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:11.636 18:25:00 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:11.636 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.636 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:11.636 [2024-10-08 18:25:00.196778] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:11.636 [2024-10-08 18:25:00.197097] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:11.636 malloc0 00:14:11.636 [2024-10-08 
18:25:00.228894] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:11.636 [2024-10-08 18:25:00.228978] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:11.636 [2024-10-08 18:25:00.228992] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:11.636 [2024-10-08 18:25:00.229007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:11.636 [2024-10-08 18:25:00.237864] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:11.636 [2024-10-08 18:25:00.237890] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:11.636 [2024-10-08 18:25:00.244777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:11.636 [2024-10-08 18:25:00.244871] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:11.636 [2024-10-08 18:25:00.261777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:11.636 0 00:14:11.636 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.636 18:25:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:11.636 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.636 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:11.898 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.898 18:25:00 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:11.898 "subsystems": [ 00:14:11.898 { 00:14:11.898 "subsystem": "fsdev", 00:14:11.898 "config": [ 00:14:11.898 { 00:14:11.898 "method": "fsdev_set_opts", 00:14:11.898 "params": { 00:14:11.898 "fsdev_io_pool_size": 65535, 00:14:11.898 "fsdev_io_cache_size": 256 00:14:11.898 } 00:14:11.898 } 00:14:11.898 ] 
00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "subsystem": "keyring", 00:14:11.898 "config": [] 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "subsystem": "iobuf", 00:14:11.898 "config": [ 00:14:11.898 { 00:14:11.898 "method": "iobuf_set_options", 00:14:11.898 "params": { 00:14:11.898 "small_pool_count": 8192, 00:14:11.898 "large_pool_count": 1024, 00:14:11.898 "small_bufsize": 8192, 00:14:11.898 "large_bufsize": 135168 00:14:11.898 } 00:14:11.898 } 00:14:11.898 ] 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "subsystem": "sock", 00:14:11.898 "config": [ 00:14:11.898 { 00:14:11.898 "method": "sock_set_default_impl", 00:14:11.898 "params": { 00:14:11.898 "impl_name": "posix" 00:14:11.898 } 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "method": "sock_impl_set_options", 00:14:11.898 "params": { 00:14:11.898 "impl_name": "ssl", 00:14:11.898 "recv_buf_size": 4096, 00:14:11.898 "send_buf_size": 4096, 00:14:11.898 "enable_recv_pipe": true, 00:14:11.898 "enable_quickack": false, 00:14:11.898 "enable_placement_id": 0, 00:14:11.898 "enable_zerocopy_send_server": true, 00:14:11.898 "enable_zerocopy_send_client": false, 00:14:11.898 "zerocopy_threshold": 0, 00:14:11.898 "tls_version": 0, 00:14:11.898 "enable_ktls": false 00:14:11.898 } 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "method": "sock_impl_set_options", 00:14:11.898 "params": { 00:14:11.898 "impl_name": "posix", 00:14:11.898 "recv_buf_size": 2097152, 00:14:11.898 "send_buf_size": 2097152, 00:14:11.898 "enable_recv_pipe": true, 00:14:11.898 "enable_quickack": false, 00:14:11.898 "enable_placement_id": 0, 00:14:11.898 "enable_zerocopy_send_server": true, 00:14:11.898 "enable_zerocopy_send_client": false, 00:14:11.898 "zerocopy_threshold": 0, 00:14:11.898 "tls_version": 0, 00:14:11.898 "enable_ktls": false 00:14:11.898 } 00:14:11.898 } 00:14:11.898 ] 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "subsystem": "vmd", 00:14:11.898 "config": [] 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "subsystem": "accel", 00:14:11.898 "config": 
[ 00:14:11.898 { 00:14:11.898 "method": "accel_set_options", 00:14:11.898 "params": { 00:14:11.898 "small_cache_size": 128, 00:14:11.898 "large_cache_size": 16, 00:14:11.898 "task_count": 2048, 00:14:11.898 "sequence_count": 2048, 00:14:11.898 "buf_count": 2048 00:14:11.898 } 00:14:11.898 } 00:14:11.898 ] 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "subsystem": "bdev", 00:14:11.898 "config": [ 00:14:11.898 { 00:14:11.898 "method": "bdev_set_options", 00:14:11.898 "params": { 00:14:11.898 "bdev_io_pool_size": 65535, 00:14:11.898 "bdev_io_cache_size": 256, 00:14:11.898 "bdev_auto_examine": true, 00:14:11.898 "iobuf_small_cache_size": 128, 00:14:11.898 "iobuf_large_cache_size": 16 00:14:11.898 } 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "method": "bdev_raid_set_options", 00:14:11.898 "params": { 00:14:11.898 "process_window_size_kb": 1024, 00:14:11.898 "process_max_bandwidth_mb_sec": 0 00:14:11.898 } 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "method": "bdev_iscsi_set_options", 00:14:11.898 "params": { 00:14:11.898 "timeout_sec": 30 00:14:11.898 } 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "method": "bdev_nvme_set_options", 00:14:11.898 "params": { 00:14:11.898 "action_on_timeout": "none", 00:14:11.898 "timeout_us": 0, 00:14:11.898 "timeout_admin_us": 0, 00:14:11.898 "keep_alive_timeout_ms": 10000, 00:14:11.898 "arbitration_burst": 0, 00:14:11.898 "low_priority_weight": 0, 00:14:11.898 "medium_priority_weight": 0, 00:14:11.898 "high_priority_weight": 0, 00:14:11.898 "nvme_adminq_poll_period_us": 10000, 00:14:11.898 "nvme_ioq_poll_period_us": 0, 00:14:11.898 "io_queue_requests": 0, 00:14:11.898 "delay_cmd_submit": true, 00:14:11.898 "transport_retry_count": 4, 00:14:11.898 "bdev_retry_count": 3, 00:14:11.898 "transport_ack_timeout": 0, 00:14:11.898 "ctrlr_loss_timeout_sec": 0, 00:14:11.898 "reconnect_delay_sec": 0, 00:14:11.898 "fast_io_fail_timeout_sec": 0, 00:14:11.898 "disable_auto_failback": false, 00:14:11.898 "generate_uuids": false, 00:14:11.898 
"transport_tos": 0, 00:14:11.898 "nvme_error_stat": false, 00:14:11.898 "rdma_srq_size": 0, 00:14:11.898 "io_path_stat": false, 00:14:11.898 "allow_accel_sequence": false, 00:14:11.898 "rdma_max_cq_size": 0, 00:14:11.898 "rdma_cm_event_timeout_ms": 0, 00:14:11.898 "dhchap_digests": [ 00:14:11.898 "sha256", 00:14:11.898 "sha384", 00:14:11.898 "sha512" 00:14:11.898 ], 00:14:11.898 "dhchap_dhgroups": [ 00:14:11.898 "null", 00:14:11.898 "ffdhe2048", 00:14:11.898 "ffdhe3072", 00:14:11.898 "ffdhe4096", 00:14:11.898 "ffdhe6144", 00:14:11.898 "ffdhe8192" 00:14:11.898 ] 00:14:11.898 } 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "method": "bdev_nvme_set_hotplug", 00:14:11.898 "params": { 00:14:11.898 "period_us": 100000, 00:14:11.898 "enable": false 00:14:11.898 } 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "method": "bdev_malloc_create", 00:14:11.898 "params": { 00:14:11.898 "name": "malloc0", 00:14:11.898 "num_blocks": 8192, 00:14:11.898 "block_size": 4096, 00:14:11.898 "physical_block_size": 4096, 00:14:11.898 "uuid": "2a7cb6eb-ed97-45e3-9ae0-f4b722362b94", 00:14:11.898 "optimal_io_boundary": 0, 00:14:11.898 "md_size": 0, 00:14:11.898 "dif_type": 0, 00:14:11.898 "dif_is_head_of_md": false, 00:14:11.898 "dif_pi_format": 0 00:14:11.898 } 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "method": "bdev_wait_for_examine" 00:14:11.898 } 00:14:11.898 ] 00:14:11.898 }, 00:14:11.898 { 00:14:11.898 "subsystem": "scsi", 00:14:11.898 "config": null 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "subsystem": "scheduler", 00:14:11.899 "config": [ 00:14:11.899 { 00:14:11.899 "method": "framework_set_scheduler", 00:14:11.899 "params": { 00:14:11.899 "name": "static" 00:14:11.899 } 00:14:11.899 } 00:14:11.899 ] 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "subsystem": "vhost_scsi", 00:14:11.899 "config": [] 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "subsystem": "vhost_blk", 00:14:11.899 "config": [] 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "subsystem": "ublk", 00:14:11.899 "config": [ 
00:14:11.899 { 00:14:11.899 "method": "ublk_create_target", 00:14:11.899 "params": { 00:14:11.899 "cpumask": "1" 00:14:11.899 } 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "method": "ublk_start_disk", 00:14:11.899 "params": { 00:14:11.899 "bdev_name": "malloc0", 00:14:11.899 "ublk_id": 0, 00:14:11.899 "num_queues": 1, 00:14:11.899 "queue_depth": 128 00:14:11.899 } 00:14:11.899 } 00:14:11.899 ] 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "subsystem": "nbd", 00:14:11.899 "config": [] 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "subsystem": "nvmf", 00:14:11.899 "config": [ 00:14:11.899 { 00:14:11.899 "method": "nvmf_set_config", 00:14:11.899 "params": { 00:14:11.899 "discovery_filter": "match_any", 00:14:11.899 "admin_cmd_passthru": { 00:14:11.899 "identify_ctrlr": false 00:14:11.899 }, 00:14:11.899 "dhchap_digests": [ 00:14:11.899 "sha256", 00:14:11.899 "sha384", 00:14:11.899 "sha512" 00:14:11.899 ], 00:14:11.899 "dhchap_dhgroups": [ 00:14:11.899 "null", 00:14:11.899 "ffdhe2048", 00:14:11.899 "ffdhe3072", 00:14:11.899 "ffdhe4096", 00:14:11.899 "ffdhe6144", 00:14:11.899 "ffdhe8192" 00:14:11.899 ] 00:14:11.899 } 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "method": "nvmf_set_max_subsystems", 00:14:11.899 "params": { 00:14:11.899 "max_subsystems": 1024 00:14:11.899 } 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "method": "nvmf_set_crdt", 00:14:11.899 "params": { 00:14:11.899 "crdt1": 0, 00:14:11.899 "crdt2": 0, 00:14:11.899 "crdt3": 0 00:14:11.899 } 00:14:11.899 } 00:14:11.899 ] 00:14:11.899 }, 00:14:11.899 { 00:14:11.899 "subsystem": "iscsi", 00:14:11.899 "config": [ 00:14:11.899 { 00:14:11.899 "method": "iscsi_set_options", 00:14:11.899 "params": { 00:14:11.899 "node_base": "iqn.2016-06.io.spdk", 00:14:11.899 "max_sessions": 128, 00:14:11.899 "max_connections_per_session": 2, 00:14:11.899 "max_queue_depth": 64, 00:14:11.899 "default_time2wait": 2, 00:14:11.899 "default_time2retain": 20, 00:14:11.899 "first_burst_length": 8192, 00:14:11.899 "immediate_data": true, 
00:14:11.899 "allow_duplicated_isid": false, 00:14:11.899 "error_recovery_level": 0, 00:14:11.899 "nop_timeout": 60, 00:14:11.899 "nop_in_interval": 30, 00:14:11.899 "disable_chap": false, 00:14:11.899 "require_chap": false, 00:14:11.899 "mutual_chap": false, 00:14:11.899 "chap_group": 0, 00:14:11.899 "max_large_datain_per_connection": 64, 00:14:11.899 "max_r2t_per_connection": 4, 00:14:11.899 "pdu_pool_size": 36864, 00:14:11.899 "immediate_data_pool_size": 16384, 00:14:11.899 "data_out_pool_size": 2048 00:14:11.899 } 00:14:11.899 } 00:14:11.899 ] 00:14:11.899 } 00:14:11.899 ] 00:14:11.899 }' 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 83801 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83801 ']' 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83801 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83801 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:11.899 killing process with pid 83801 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83801' 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83801 00:14:11.899 18:25:00 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83801 00:14:12.160 [2024-10-08 18:25:00.972454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:12.420 [2024-10-08 18:25:01.013892] ublk.c: 349:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:12.420 [2024-10-08 18:25:01.014079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:12.420 [2024-10-08 18:25:01.017786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:12.420 [2024-10-08 18:25:01.017852] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:12.420 [2024-10-08 18:25:01.017866] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:12.420 [2024-10-08 18:25:01.017899] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:12.420 [2024-10-08 18:25:01.018077] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:12.993 18:25:01 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=83839 00:14:12.993 18:25:01 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 83839 00:14:12.993 18:25:01 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83839 ']' 00:14:12.993 18:25:01 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.993 18:25:01 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.993 18:25:01 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:12.993 18:25:01 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.993 18:25:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:12.993 18:25:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:12.993 "subsystems": [ 00:14:12.993 { 00:14:12.993 "subsystem": "fsdev", 00:14:12.993 "config": [ 00:14:12.993 { 00:14:12.993 "method": "fsdev_set_opts", 00:14:12.993 "params": { 00:14:12.993 "fsdev_io_pool_size": 65535, 00:14:12.993 "fsdev_io_cache_size": 256 00:14:12.993 } 00:14:12.993 } 00:14:12.993 ] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "keyring", 00:14:12.993 "config": [] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "iobuf", 00:14:12.993 "config": [ 00:14:12.993 { 00:14:12.993 "method": "iobuf_set_options", 00:14:12.993 "params": { 00:14:12.993 "small_pool_count": 8192, 00:14:12.993 "large_pool_count": 1024, 00:14:12.993 "small_bufsize": 8192, 00:14:12.993 "large_bufsize": 135168 00:14:12.993 } 00:14:12.993 } 00:14:12.993 ] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "sock", 00:14:12.993 "config": [ 00:14:12.993 { 00:14:12.993 "method": "sock_set_default_impl", 00:14:12.993 "params": { 00:14:12.993 "impl_name": "posix" 00:14:12.993 } 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "method": "sock_impl_set_options", 00:14:12.993 "params": { 00:14:12.993 "impl_name": "ssl", 00:14:12.993 "recv_buf_size": 4096, 00:14:12.993 "send_buf_size": 4096, 00:14:12.993 "enable_recv_pipe": true, 00:14:12.993 "enable_quickack": false, 00:14:12.993 "enable_placement_id": 0, 00:14:12.993 "enable_zerocopy_send_server": true, 00:14:12.993 "enable_zerocopy_send_client": false, 00:14:12.993 "zerocopy_threshold": 0, 00:14:12.993 "tls_version": 0, 00:14:12.993 "enable_ktls": false 00:14:12.993 } 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "method": "sock_impl_set_options", 00:14:12.993 "params": { 00:14:12.993 "impl_name": "posix", 00:14:12.993 "recv_buf_size": 
2097152, 00:14:12.993 "send_buf_size": 2097152, 00:14:12.993 "enable_recv_pipe": true, 00:14:12.993 "enable_quickack": false, 00:14:12.993 "enable_placement_id": 0, 00:14:12.993 "enable_zerocopy_send_server": true, 00:14:12.993 "enable_zerocopy_send_client": false, 00:14:12.993 "zerocopy_threshold": 0, 00:14:12.993 "tls_version": 0, 00:14:12.993 "enable_ktls": false 00:14:12.993 } 00:14:12.993 } 00:14:12.993 ] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "vmd", 00:14:12.993 "config": [] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "accel", 00:14:12.993 "config": [ 00:14:12.993 { 00:14:12.993 "method": "accel_set_options", 00:14:12.993 "params": { 00:14:12.993 "small_cache_size": 128, 00:14:12.993 "large_cache_size": 16, 00:14:12.993 "task_count": 2048, 00:14:12.993 "sequence_count": 2048, 00:14:12.993 "buf_count": 2048 00:14:12.993 } 00:14:12.993 } 00:14:12.993 ] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "bdev", 00:14:12.993 "config": [ 00:14:12.993 { 00:14:12.993 "method": "bdev_set_options", 00:14:12.993 "params": { 00:14:12.993 "bdev_io_pool_size": 65535, 00:14:12.993 "bdev_io_cache_size": 256, 00:14:12.993 "bdev_auto_examine": true, 00:14:12.993 "iobuf_small_cache_size": 128, 00:14:12.993 "iobuf_large_cache_size": 16 00:14:12.993 } 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "method": "bdev_raid_set_options", 00:14:12.993 "params": { 00:14:12.993 "process_window_size_kb": 1024, 00:14:12.993 "process_max_bandwidth_mb_sec": 0 00:14:12.993 } 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "method": "bdev_iscsi_set_options", 00:14:12.993 "params": { 00:14:12.993 "timeout_sec": 30 00:14:12.993 } 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "method": "bdev_nvme_set_options", 00:14:12.993 "params": { 00:14:12.993 "action_on_timeout": "none", 00:14:12.993 "timeout_us": 0, 00:14:12.993 "timeout_admin_us": 0, 00:14:12.993 "keep_alive_timeout_ms": 10000, 00:14:12.993 "arbitration_burst": 0, 00:14:12.993 "low_priority_weight": 0, 
00:14:12.993 "medium_priority_weight": 0, 00:14:12.993 "high_priority_weight": 0, 00:14:12.993 "nvme_adminq_poll_period_us": 10000, 00:14:12.993 "nvme_ioq_poll_period_us": 0, 00:14:12.993 "io_queue_requests": 0, 00:14:12.993 "delay_cmd_submit": true, 00:14:12.993 "transport_retry_count": 4, 00:14:12.993 "bdev_retry_count": 3, 00:14:12.993 "transport_ack_timeout": 0, 00:14:12.993 "ctrlr_loss_timeout_sec": 0, 00:14:12.993 "reconnect_delay_sec": 0, 00:14:12.993 "fast_io_fail_timeout_sec": 0, 00:14:12.993 "disable_auto_failback": false, 00:14:12.993 "generate_uuids": false, 00:14:12.993 "transport_tos": 0, 00:14:12.993 "nvme_error_stat": false, 00:14:12.993 "rdma_srq_size": 0, 00:14:12.993 "io_path_stat": false, 00:14:12.993 "allow_accel_sequence": false, 00:14:12.993 "rdma_max_cq_size": 0, 00:14:12.993 "rdma_cm_event_timeout_ms": 0, 00:14:12.993 "dhchap_digests": [ 00:14:12.993 "sha256", 00:14:12.993 "sha384", 00:14:12.993 "sha512" 00:14:12.993 ], 00:14:12.993 "dhchap_dhgroups": [ 00:14:12.993 "null", 00:14:12.993 "ffdhe2048", 00:14:12.993 "ffdhe3072", 00:14:12.993 "ffdhe4096", 00:14:12.993 "ffdhe6144", 00:14:12.993 "ffdhe8192" 00:14:12.993 ] 00:14:12.993 } 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "method": "bdev_nvme_set_hotplug", 00:14:12.993 "params": { 00:14:12.993 "period_us": 100000, 00:14:12.993 "enable": false 00:14:12.993 } 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "method": "bdev_malloc_create", 00:14:12.993 "params": { 00:14:12.993 "name": "malloc0", 00:14:12.993 "num_blocks": 8192, 00:14:12.993 "block_size": 4096, 00:14:12.993 "physical_block_size": 4096, 00:14:12.993 "uuid": "2a7cb6eb-ed97-45e3-9ae0-f4b722362b94", 00:14:12.993 "optimal_io_boundary": 0, 00:14:12.993 "md_size": 0, 00:14:12.993 "dif_type": 0, 00:14:12.993 "dif_is_head_of_md": false, 00:14:12.993 "dif_pi_format": 0 00:14:12.993 } 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "method": "bdev_wait_for_examine" 00:14:12.993 } 00:14:12.993 ] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 
"subsystem": "scsi", 00:14:12.993 "config": null 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "scheduler", 00:14:12.993 "config": [ 00:14:12.993 { 00:14:12.993 "method": "framework_set_scheduler", 00:14:12.993 "params": { 00:14:12.993 "name": "static" 00:14:12.993 } 00:14:12.993 } 00:14:12.993 ] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "vhost_scsi", 00:14:12.993 "config": [] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "vhost_blk", 00:14:12.993 "config": [] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "ublk", 00:14:12.993 "config": [ 00:14:12.993 { 00:14:12.993 "method": "ublk_create_target", 00:14:12.993 "params": { 00:14:12.993 "cpumask": "1" 00:14:12.993 } 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "method": "ublk_start_disk", 00:14:12.993 "params": { 00:14:12.993 "bdev_name": "malloc0", 00:14:12.993 "ublk_id": 0, 00:14:12.993 "num_queues": 1, 00:14:12.993 "queue_depth": 128 00:14:12.993 } 00:14:12.993 } 00:14:12.993 ] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "nbd", 00:14:12.993 "config": [] 00:14:12.993 }, 00:14:12.993 { 00:14:12.993 "subsystem": "nvmf", 00:14:12.993 "config": [ 00:14:12.993 { 00:14:12.993 "method": "nvmf_set_config", 00:14:12.993 "params": { 00:14:12.993 "discovery_filter": "match_any", 00:14:12.993 "admin_cmd_passthru": { 00:14:12.993 "identify_ctrlr": false 00:14:12.993 }, 00:14:12.993 "dhchap_digests": [ 00:14:12.993 "sha256", 00:14:12.993 "sha384", 00:14:12.993 "sha512" 00:14:12.993 ], 00:14:12.993 "dhchap_dhgroups": [ 00:14:12.993 "null", 00:14:12.993 "ffdhe2048", 00:14:12.993 "ffdhe3072", 00:14:12.993 "ffdhe4096", 00:14:12.993 "ffdhe6144", 00:14:12.993 "ffdhe8192" 00:14:12.993 ] 00:14:12.993 } 00:14:12.994 }, 00:14:12.994 { 00:14:12.994 "method": "nvmf_set_max_subsystems", 00:14:12.994 "params": { 00:14:12.994 "max_subsystems": 1024 00:14:12.994 } 00:14:12.994 }, 00:14:12.994 { 00:14:12.994 "method": "nvmf_set_crdt", 00:14:12.994 "params": { 00:14:12.994 "crdt1": 0, 
00:14:12.994 "crdt2": 0, 00:14:12.994 "crdt3": 0 00:14:12.994 } 00:14:12.994 } 00:14:12.994 ] 00:14:12.994 }, 00:14:12.994 { 00:14:12.994 "subsystem": "iscsi", 00:14:12.994 "config": [ 00:14:12.994 { 00:14:12.994 "method": "iscsi_set_options", 00:14:12.994 "params": { 00:14:12.994 "node_base": "iqn.2016-06.io.spdk", 00:14:12.994 "max_sessions": 128, 00:14:12.994 "max_connections_per_session": 2, 00:14:12.994 "max_queue_depth": 64, 00:14:12.994 "default_time2wait": 2, 00:14:12.994 "default_time2retain": 20, 00:14:12.994 "first_burst_length": 8192, 00:14:12.994 "immediate_data": true, 00:14:12.994 "allow_duplicated_isid": false, 00:14:12.994 "error_recovery_level": 0, 00:14:12.994 "nop_timeout": 60, 00:14:12.994 "nop_in_interval": 30, 00:14:12.994 "disable_chap": false, 00:14:12.994 "require_chap": false, 00:14:12.994 "mutual_chap": false, 00:14:12.994 "chap_group": 0, 00:14:12.994 "max_large_datain_per_connection": 64, 00:14:12.994 "max_r2t_per_connection": 4, 00:14:12.994 "pdu_pool_size": 36864, 00:14:12.994 "immediate_data_pool_size": 16384, 00:14:12.994 "data_out_pool_size": 2048 00:14:12.994 } 00:14:12.994 } 00:14:12.994 ] 00:14:12.994 } 00:14:12.994 ] 00:14:12.994 }' 00:14:12.994 18:25:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:12.994 [2024-10-08 18:25:01.791822] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:12.994 [2024-10-08 18:25:01.791971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83839 ] 00:14:13.255 [2024-10-08 18:25:01.926175] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:13.255 [2024-10-08 18:25:01.946503] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.255 [2024-10-08 18:25:02.030010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.826 [2024-10-08 18:25:02.490775] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:13.826 [2024-10-08 18:25:02.491210] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:13.826 [2024-10-08 18:25:02.498932] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:13.826 [2024-10-08 18:25:02.499059] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:13.826 [2024-10-08 18:25:02.499069] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:13.826 [2024-10-08 18:25:02.499080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:13.826 [2024-10-08 18:25:02.507901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:13.826 [2024-10-08 18:25:02.507933] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:13.826 [2024-10-08 18:25:02.511651] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:13.826 [2024-10-08 18:25:02.511793] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:13.826 [2024-10-08 18:25:02.522779] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:13.827 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.827 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:13.827 18:25:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:13.827 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.827 18:25:02 ublk.test_save_ublk_config -- 
common/autotest_common.sh@10 -- # set +x 00:14:13.827 18:25:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 83839 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83839 ']' 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83839 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83839 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:14.088 killing process with pid 83839 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83839' 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83839 00:14:14.088 18:25:02 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83839 00:14:14.348 [2024-10-08 18:25:03.138810] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:14.348 [2024-10-08 18:25:03.177801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:14.348 [2024-10-08 18:25:03.177956] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_DEL_DEV 00:14:14.348 [2024-10-08 18:25:03.185793] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:14.348 [2024-10-08 18:25:03.185863] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:14.349 [2024-10-08 18:25:03.185881] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:14.349 [2024-10-08 18:25:03.185912] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:14.349 [2024-10-08 18:25:03.186076] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:15.328 18:25:03 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:15.328 00:14:15.328 real 0m4.576s 00:14:15.328 user 0m2.772s 00:14:15.328 sys 0m2.448s 00:14:15.328 18:25:03 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:15.328 18:25:03 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:15.328 ************************************ 00:14:15.328 END TEST test_save_ublk_config 00:14:15.328 ************************************ 00:14:15.328 18:25:03 ublk -- ublk/ublk.sh@139 -- # spdk_pid=83895 00:14:15.328 18:25:03 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:15.328 18:25:03 ublk -- ublk/ublk.sh@141 -- # waitforlisten 83895 00:14:15.328 18:25:03 ublk -- common/autotest_common.sh@831 -- # '[' -z 83895 ']' 00:14:15.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.328 18:25:03 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.328 18:25:03 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:15.328 18:25:03 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.328 18:25:03 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:15.328 18:25:03 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.328 18:25:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:15.328 [2024-10-08 18:25:04.022008] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:15.328 [2024-10-08 18:25:04.022175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83895 ] 00:14:15.328 [2024-10-08 18:25:04.158857] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:15.328 [2024-10-08 18:25:04.175888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:15.589 [2024-10-08 18:25:04.249247] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.589 [2024-10-08 18:25:04.249287] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.160 18:25:04 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:16.160 18:25:04 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:16.160 18:25:04 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:16.160 18:25:04 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:16.160 18:25:04 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:16.160 18:25:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.160 ************************************ 00:14:16.160 START TEST test_create_ublk 00:14:16.160 ************************************ 00:14:16.160 18:25:04 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:16.160 18:25:04 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:16.160 18:25:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:16.160 18:25:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.160 [2024-10-08 18:25:04.901782] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:16.160 [2024-10-08 18:25:04.904112] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:16.160 18:25:04 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.160 18:25:04 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:16.160 18:25:04 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:16.160 18:25:04 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.160 18:25:04 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 18:25:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.421 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:16.421 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:16.421 18:25:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.421 18:25:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 [2024-10-08 18:25:05.026975] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:16.421 [2024-10-08 18:25:05.027470] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:16.421 [2024-10-08 18:25:05.027501] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:16.421 [2024-10-08 18:25:05.027511] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:16.421 [2024-10-08 18:25:05.034824] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:16.421 [2024-10-08 18:25:05.034867] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_SET_PARAMS 00:14:16.421 [2024-10-08 18:25:05.042790] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:16.421 [2024-10-08 18:25:05.043561] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:16.421 [2024-10-08 18:25:05.066784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:16.421 18:25:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.421 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:16.421 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:16.421 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:16.421 18:25:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.421 18:25:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:16.421 18:25:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.421 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:16.421 { 00:14:16.421 "ublk_device": "/dev/ublkb0", 00:14:16.421 "id": 0, 00:14:16.421 "queue_depth": 512, 00:14:16.421 "num_queues": 4, 00:14:16.421 "bdev_name": "Malloc0" 00:14:16.421 } 00:14:16.421 ]' 00:14:16.421 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:16.421 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:16.421 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:16.421 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:16.422 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:16.422 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:16.422 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:16.422 18:25:05 
ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:16.422 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:16.422 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:16.422 18:25:05 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:16.422 18:25:05 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:16.683 fio: verification read phase will never start because write phase uses all of runtime 00:14:16.683 fio_test: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:16.683 fio-3.35 00:14:16.683 Starting 1 process 00:14:26.651 00:14:26.651 fio_test: (groupid=0, jobs=1): err= 0: pid=83940: Tue Oct 8 18:25:15 2024 00:14:26.651 write: IOPS=13.1k, BW=51.2MiB/s (53.7MB/s)(512MiB/10001msec); 0 zone resets 00:14:26.651 clat (usec): min=42, max=7990, avg=75.60, stdev=135.64 00:14:26.651 lat (usec): min=42, max=7993, avg=75.99, stdev=135.74 00:14:26.651 clat percentiles (usec): 00:14:26.651 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 64], 00:14:26.651 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 69], 00:14:26.651 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 78], 95.00th=[ 81], 00:14:26.651 | 99.00th=[ 139], 99.50th=[ 273], 99.90th=[ 2933], 99.95th=[ 3654], 00:14:26.651 | 99.99th=[ 4080] 00:14:26.651 bw ( KiB/s): min= 7400, max=56608, per=99.82%, avg=52337.26, stdev=11169.41, samples=19 00:14:26.651 iops : min= 1850, max=14152, avg=13084.32, stdev=2792.35, samples=19 00:14:26.651 lat (usec) : 50=0.01%, 100=98.75%, 250=0.54%, 500=0.48%, 750=0.01% 00:14:26.651 lat (usec) : 1000=0.02% 00:14:26.651 lat (msec) : 2=0.06%, 4=0.14%, 10=0.02% 00:14:26.651 cpu : usr=1.83%, sys=10.98%, ctx=131096, majf=0, minf=797 00:14:26.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:26.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:26.651 issued rwts: total=0,131095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:26.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:26.651 00:14:26.651 Run status group 0 (all jobs): 00:14:26.651 WRITE: bw=51.2MiB/s (53.7MB/s), 51.2MiB/s-51.2MiB/s (53.7MB/s-53.7MB/s), io=512MiB (537MB), run=10001-10001msec 00:14:26.651 00:14:26.651 Disk stats (read/write): 00:14:26.651 ublkb0: ios=0/129649, merge=0/0, ticks=0/8421, in_queue=8421, util=99.08% 00:14:26.651 18:25:15 
ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:26.651 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.651 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.651 [2024-10-08 18:25:15.490844] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:26.910 [2024-10-08 18:25:15.534808] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:26.910 [2024-10-08 18:25:15.535496] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:26.910 [2024-10-08 18:25:15.545807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:26.910 [2024-10-08 18:25:15.546053] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:26.910 [2024-10-08 18:25:15.546066] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.910 18:25:15 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.910 [2024-10-08 18:25:15.560862] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:26.910 request: 00:14:26.910 { 00:14:26.910 "ublk_id": 0, 00:14:26.910 "method": "ublk_stop_disk", 00:14:26.910 "req_id": 1 00:14:26.910 } 00:14:26.910 Got JSON-RPC error response 00:14:26.910 response: 00:14:26.910 { 00:14:26.910 "code": -19, 00:14:26.910 "message": "No such device" 00:14:26.910 } 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:26.910 18:25:15 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.910 [2024-10-08 18:25:15.576840] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:26.910 [2024-10-08 18:25:15.578091] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:26.910 [2024-10-08 18:25:15.578122] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.910 18:25:15 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.910 18:25:15 ublk.test_create_ublk -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.910 18:25:15 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:26.910 18:25:15 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.910 18:25:15 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:26.910 18:25:15 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:26.910 18:25:15 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:26.910 18:25:15 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.910 18:25:15 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:26.910 18:25:15 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:26.910 18:25:15 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:26.910 00:14:26.910 real 0m10.858s 00:14:26.910 user 0m0.482s 00:14:26.910 sys 0m1.195s 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:26.910 ************************************ 00:14:26.910 END TEST test_create_ublk 00:14:26.910 ************************************ 00:14:26.910 18:25:15 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.170 18:25:15 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:27.170 18:25:15 ublk -- common/autotest_common.sh@1101 -- # '[' 2 
-le 1 ']' 00:14:27.170 18:25:15 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:27.170 18:25:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.170 ************************************ 00:14:27.170 START TEST test_create_multi_ublk 00:14:27.170 ************************************ 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.170 [2024-10-08 18:25:15.800762] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:27.170 [2024-10-08 18:25:15.801684] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:27.170 18:25:15 
ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.170 [2024-10-08 18:25:15.884891] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:27.170 [2024-10-08 18:25:15.885198] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:27.170 [2024-10-08 18:25:15.885211] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:27.170 [2024-10-08 18:25:15.885226] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.170 [2024-10-08 18:25:15.896813] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.170 [2024-10-08 18:25:15.896830] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.170 [2024-10-08 18:25:15.908775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.170 [2024-10-08 18:25:15.909272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:27.170 [2024-10-08 18:25:15.924782] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:27.170 18:25:15 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:27.170 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:27.170 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.170 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.170 [2024-10-08 18:25:16.007867] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:27.170 [2024-10-08 18:25:16.008173] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:27.170 [2024-10-08 18:25:16.008188] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:27.170 [2024-10-08 18:25:16.008193] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.429 [2024-10-08 18:25:16.019808] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.429 [2024-10-08 18:25:16.019820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.429 [2024-10-08 18:25:16.031774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.429 [2024-10-08 18:25:16.032251] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:27.429 [2024-10-08 18:25:16.071786] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.429 [2024-10-08 18:25:16.155857] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:27.429 [2024-10-08 18:25:16.156156] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:27.429 [2024-10-08 18:25:16.156168] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:27.429 [2024-10-08 18:25:16.156174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.429 [2024-10-08 18:25:16.167795] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.429 [2024-10-08 18:25:16.167814] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.429 [2024-10-08 18:25:16.179777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.429 [2024-10-08 18:25:16.180257] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:27.429 [2024-10-08 18:25:16.204783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:27.429 18:25:16 
ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.429 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.688 [2024-10-08 18:25:16.284879] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:27.688 [2024-10-08 18:25:16.285172] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:27.688 [2024-10-08 18:25:16.285186] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:27.688 [2024-10-08 18:25:16.285190] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.688 [2024-10-08 18:25:16.299790] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.688 [2024-10-08 18:25:16.299806] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.688 [2024-10-08 18:25:16.311784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.688 [2024-10-08 18:25:16.312271] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:27.688 [2024-10-08 18:25:16.324806] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd 
UBLK_CMD_START_DEV completed 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:27.688 { 00:14:27.688 "ublk_device": "/dev/ublkb0", 00:14:27.688 "id": 0, 00:14:27.688 "queue_depth": 512, 00:14:27.688 "num_queues": 4, 00:14:27.688 "bdev_name": "Malloc0" 00:14:27.688 }, 00:14:27.688 { 00:14:27.688 "ublk_device": "/dev/ublkb1", 00:14:27.688 "id": 1, 00:14:27.688 "queue_depth": 512, 00:14:27.688 "num_queues": 4, 00:14:27.688 "bdev_name": "Malloc1" 00:14:27.688 }, 00:14:27.688 { 00:14:27.688 "ublk_device": "/dev/ublkb2", 00:14:27.688 "id": 2, 00:14:27.688 "queue_depth": 512, 00:14:27.688 "num_queues": 4, 00:14:27.688 "bdev_name": "Malloc2" 00:14:27.688 }, 00:14:27.688 { 00:14:27.688 "ublk_device": "/dev/ublkb3", 00:14:27.688 "id": 3, 00:14:27.688 "queue_depth": 512, 00:14:27.688 "num_queues": 4, 00:14:27.688 "bdev_name": "Malloc3" 00:14:27.688 } 00:14:27.688 ]' 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:27.688 
18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.688 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- 
ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:27.948 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:28.206 
18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.206 18:25:16 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.206 [2024-10-08 18:25:17.007849] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.206 [2024-10-08 18:25:17.045308] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.206 [2024-10-08 18:25:17.046437] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.464 [2024-10-08 18:25:17.055781] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.464 [2024-10-08 18:25:17.056029] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:28.464 [2024-10-08 18:25:17.056043] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:28.464 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.464 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.464 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:28.464 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.465 [2024-10-08 18:25:17.071820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.465 [2024-10-08 18:25:17.111775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.465 [2024-10-08 18:25:17.112627] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.465 [2024-10-08 18:25:17.119783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.465 [2024-10-08 18:25:17.120021] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:28.465 [2024-10-08 18:25:17.120035] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.465 [2024-10-08 18:25:17.135844] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.465 [2024-10-08 18:25:17.166180] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.465 [2024-10-08 18:25:17.167296] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.465 [2024-10-08 18:25:17.175777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.465 [2024-10-08 18:25:17.175999] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:28.465 [2024-10-08 18:25:17.176011] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.465 [2024-10-08 18:25:17.191837] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:28.465 [2024-10-08 18:25:17.225802] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:28.465 [2024-10-08 18:25:17.226453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:28.465 [2024-10-08 18:25:17.235819] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:28.465 [2024-10-08 18:25:17.236039] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:28.465 [2024-10-08 18:25:17.236052] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.465 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:28.724 [2024-10-08 18:25:17.434821] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:28.724 [2024-10-08 18:25:17.435981] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:28.724 [2024-10-08 18:25:17.436008] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.724 
18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.724 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:28.983 00:14:28.983 real 0m1.995s 00:14:28.983 user 0m0.826s 00:14:28.983 sys 0m0.138s 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:28.983 ************************************ 00:14:28.983 18:25:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.983 END TEST test_create_multi_ublk 00:14:28.983 ************************************ 00:14:28.983 18:25:17 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:28.983 18:25:17 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:28.983 18:25:17 ublk -- ublk/ublk.sh@130 -- # killprocess 83895 00:14:28.983 18:25:17 ublk -- common/autotest_common.sh@950 -- # '[' -z 83895 ']' 00:14:28.983 18:25:17 ublk -- common/autotest_common.sh@954 -- # kill -0 83895 00:14:28.983 18:25:17 ublk -- common/autotest_common.sh@955 -- # uname 00:14:28.983 18:25:17 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:28.983 18:25:17 ublk -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 83895 00:14:28.983 18:25:17 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:28.983 killing process with pid 83895 00:14:28.983 18:25:17 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:28.983 18:25:17 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83895' 00:14:28.983 18:25:17 ublk -- common/autotest_common.sh@969 -- # kill 83895 00:14:28.983 18:25:17 ublk -- common/autotest_common.sh@974 -- # wait 83895 00:14:29.241 [2024-10-08 18:25:17.997253] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:29.241 [2024-10-08 18:25:17.997313] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:29.501 00:14:29.501 real 0m19.181s 00:14:29.501 user 0m28.188s 00:14:29.501 sys 0m8.395s 00:14:29.501 18:25:18 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:29.501 18:25:18 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:29.501 ************************************ 00:14:29.501 END TEST ublk 00:14:29.501 ************************************ 00:14:29.501 18:25:18 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:29.501 18:25:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:29.501 18:25:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:29.501 18:25:18 -- common/autotest_common.sh@10 -- # set +x 00:14:29.762 ************************************ 00:14:29.762 START TEST ublk_recovery 00:14:29.762 ************************************ 00:14:29.762 18:25:18 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:29.762 * Looking for test storage... 
00:14:29.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:29.762 18:25:18 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:29.762 18:25:18 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:29.762 18:25:18 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:29.762 18:25:18 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:29.762 18:25:18 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:29.762 18:25:18 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.762 18:25:18 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:29.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.762 --rc genhtml_branch_coverage=1 00:14:29.762 --rc genhtml_function_coverage=1 00:14:29.763 --rc genhtml_legend=1 00:14:29.763 --rc geninfo_all_blocks=1 00:14:29.763 --rc geninfo_unexecuted_blocks=1 00:14:29.763 00:14:29.763 ' 00:14:29.763 18:25:18 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:29.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.763 --rc genhtml_branch_coverage=1 00:14:29.763 --rc genhtml_function_coverage=1 00:14:29.763 --rc genhtml_legend=1 00:14:29.763 --rc geninfo_all_blocks=1 00:14:29.763 --rc geninfo_unexecuted_blocks=1 00:14:29.763 00:14:29.763 ' 
00:14:29.763 18:25:18 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:29.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.763 --rc genhtml_branch_coverage=1 00:14:29.763 --rc genhtml_function_coverage=1 00:14:29.763 --rc genhtml_legend=1 00:14:29.763 --rc geninfo_all_blocks=1 00:14:29.763 --rc geninfo_unexecuted_blocks=1 00:14:29.763 00:14:29.763 ' 00:14:29.763 18:25:18 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:29.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.763 --rc genhtml_branch_coverage=1 00:14:29.763 --rc genhtml_function_coverage=1 00:14:29.763 --rc genhtml_legend=1 00:14:29.763 --rc geninfo_all_blocks=1 00:14:29.763 --rc geninfo_unexecuted_blocks=1 00:14:29.763 00:14:29.763 ' 00:14:29.763 18:25:18 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:29.763 18:25:18 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:29.763 18:25:18 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:29.763 18:25:18 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:29.763 18:25:18 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:29.763 18:25:18 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:29.763 18:25:18 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:29.763 18:25:18 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:29.763 18:25:18 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:29.763 18:25:18 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:29.763 18:25:18 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=84266 00:14:29.763 18:25:18 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:29.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:29.763 18:25:18 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 84266 00:14:29.763 18:25:18 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84266 ']' 00:14:29.763 18:25:18 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.763 18:25:18 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:29.763 18:25:18 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:29.763 18:25:18 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.763 18:25:18 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:29.763 18:25:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:29.763 [2024-10-08 18:25:18.592833] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:29.763 [2024-10-08 18:25:18.592983] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84266 ] 00:14:30.024 [2024-10-08 18:25:18.738286] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:30.024 [2024-10-08 18:25:18.756571] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:30.024 [2024-10-08 18:25:18.800920] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.024 [2024-10-08 18:25:18.801003] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.591 18:25:19 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:30.591 18:25:19 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:30.591 18:25:19 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:30.591 18:25:19 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.591 18:25:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.591 [2024-10-08 18:25:19.422776] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:30.591 [2024-10-08 18:25:19.423713] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:30.591 18:25:19 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.591 18:25:19 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:30.591 18:25:19 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.591 18:25:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.850 malloc0 00:14:30.850 18:25:19 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.850 18:25:19 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:30.850 18:25:19 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.850 18:25:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:30.850 [2024-10-08 18:25:19.455878] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:30.850 [2024-10-08 18:25:19.455966] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 
00:14:30.850 [2024-10-08 18:25:19.455981] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:30.850 [2024-10-08 18:25:19.455987] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:30.850 [2024-10-08 18:25:19.464855] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:30.850 [2024-10-08 18:25:19.464879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:30.850 [2024-10-08 18:25:19.471784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:30.850 [2024-10-08 18:25:19.471903] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:30.850 [2024-10-08 18:25:19.488778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:30.850 1 00:14:30.850 18:25:19 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.850 18:25:19 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:31.785 18:25:20 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=84297 00:14:31.785 18:25:20 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:31.785 18:25:20 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:31.785 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:31.785 fio-3.35 00:14:31.785 Starting 1 process 00:14:37.054 18:25:25 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 84266 00:14:37.054 18:25:25 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:42.385 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 84266 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:42.385 18:25:30 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=84406 00:14:42.385 18:25:30 
ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:42.385 18:25:30 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.385 18:25:30 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 84406 00:14:42.385 18:25:30 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84406 ']' 00:14:42.385 18:25:30 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:42.385 18:25:30 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.385 18:25:30 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:42.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:42.385 18:25:30 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.385 18:25:30 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.385 [2024-10-08 18:25:30.593520] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:14:42.385 [2024-10-08 18:25:30.593650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84406 ] 00:14:42.385 [2024-10-08 18:25:30.721317] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:42.385 [2024-10-08 18:25:30.739316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:42.385 [2024-10-08 18:25:30.790449] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.385 [2024-10-08 18:25:30.790537] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.645 18:25:31 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.645 18:25:31 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:42.645 18:25:31 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:42.645 18:25:31 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.645 18:25:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.645 [2024-10-08 18:25:31.466772] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:42.645 [2024-10-08 18:25:31.467856] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:42.645 18:25:31 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.645 18:25:31 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:42.645 18:25:31 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.645 18:25:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.905 malloc0 00:14:42.905 18:25:31 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.905 18:25:31 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:42.905 18:25:31 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.905 18:25:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.905 [2024-10-08 18:25:31.499186] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:42.905 [2024-10-08 18:25:31.499224] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:42.905 [2024-10-08 
18:25:31.499235] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:14:42.905 [2024-10-08 18:25:31.506806] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:14:42.905 [2024-10-08 18:25:31.506835] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2
00:14:42.905 [2024-10-08 18:25:31.506843] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:14:42.905 1
[2024-10-08 18:25:31.506934] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:14:42.905 18:25:31 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:42.905 18:25:31 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 84297
[2024-10-08 18:25:31.514782] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
[2024-10-08 18:25:31.520847] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
[2024-10-08 18:25:31.530040] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
[2024-10-08 18:25:31.530064] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:15:39.124
00:15:39.124 fio_test: (groupid=0, jobs=1): err= 0: pid=84300: Tue Oct 8 18:26:20 2024
00:15:39.124 read: IOPS=25.3k, BW=98.7MiB/s (103MB/s)(5920MiB/60002msec)
00:15:39.124 slat (nsec): min=1383, max=215465, avg=5343.71, stdev=1469.19
00:15:39.124 clat (usec): min=726, max=6034.4k, avg=2489.81, stdev=39496.50
00:15:39.124 lat (usec): min=732, max=6034.4k, avg=2495.16, stdev=39496.50
00:15:39.124 clat percentiles (usec):
00:15:39.124 | 1.00th=[ 1876], 5.00th=[ 2008], 10.00th=[ 2040], 20.00th=[ 2057],
00:15:39.124 | 30.00th=[ 2073], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2114],
00:15:39.124 | 70.00th=[ 2147], 80.00th=[ 2147], 90.00th=[ 2212], 95.00th=[ 3097],
00:15:39.124 | 99.00th=[ 5014], 99.50th=[ 5800], 99.90th=[ 7308], 99.95th=[ 8586],
00:15:39.124 | 99.99th=[12911]
00:15:39.124 bw ( KiB/s): min=15112, max=116640, per=100.00%, avg=111276.81, stdev=13347.82, samples=108
00:15:39.124 iops : min= 3778, max=29160, avg=27819.20, stdev=3336.96, samples=108
00:15:39.124 write: IOPS=25.2k, BW=98.5MiB/s (103MB/s)(5912MiB/60002msec); 0 zone resets
00:15:39.124 slat (nsec): min=1275, max=407448, avg=5590.54, stdev=1494.57
00:15:39.124 clat (usec): min=722, max=6034.5k, avg=2569.25, stdev=38911.10
00:15:39.124 lat (usec): min=732, max=6034.5k, avg=2574.84, stdev=38911.10
00:15:39.125 clat percentiles (usec):
00:15:39.125 | 1.00th=[ 1942], 5.00th=[ 2114], 10.00th=[ 2147], 20.00th=[ 2147],
00:15:39.125 | 30.00th=[ 2180], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2212],
00:15:39.125 | 70.00th=[ 2245], 80.00th=[ 2245], 90.00th=[ 2311], 95.00th=[ 3064],
00:15:39.125 | 99.00th=[ 5014], 99.50th=[ 5932], 99.90th=[ 7242], 99.95th=[ 8717],
00:15:39.125 | 99.99th=[13173]
00:15:39.125 bw ( KiB/s): min=14776, max=116600, per=100.00%, avg=111127.70, stdev=13285.78, samples=108
00:15:39.125 iops : min= 3694, max=29150, avg=27781.93, stdev=3321.44, samples=108
00:15:39.125 lat (usec) : 750=0.01%, 1000=0.01%
00:15:39.125 lat (msec) : 2=3.00%, 4=94.38%, 10=2.60%, 20=0.01%, >=2000=0.01%
00:15:39.125 cpu : usr=5.69%, sys=28.32%, ctx=98372, majf=0, minf=13
00:15:39.125 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:15:39.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:39.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:39.125 issued rwts: total=1515555,1513553,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:39.125 latency : target=0, window=0, percentile=100.00%, depth=128
00:15:39.125
00:15:39.125 Run status group 0 (all jobs):
00:15:39.125 READ: bw=98.7MiB/s (103MB/s), 98.7MiB/s-98.7MiB/s (103MB/s-103MB/s), io=5920MiB (6208MB),
run=60002-60002msec 00:15:39.125 WRITE: bw=98.5MiB/s (103MB/s), 98.5MiB/s-98.5MiB/s (103MB/s-103MB/s), io=5912MiB (6200MB), run=60002-60002msec 00:15:39.125 00:15:39.125 Disk stats (read/write): 00:15:39.125 ublkb1: ios=1512561/1510498, merge=0/0, ticks=3686554/3675340, in_queue=7361895, util=99.89% 00:15:39.125 18:26:20 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.125 [2024-10-08 18:26:20.761623] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:39.125 [2024-10-08 18:26:20.797800] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:39.125 [2024-10-08 18:26:20.797946] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:39.125 [2024-10-08 18:26:20.805783] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:39.125 [2024-10-08 18:26:20.805873] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:39.125 [2024-10-08 18:26:20.805896] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.125 18:26:20 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.125 [2024-10-08 18:26:20.821838] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:39.125 [2024-10-08 18:26:20.823169] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:39.125 [2024-10-08 18:26:20.823195] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:15:39.125 18:26:20 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:39.125 18:26:20 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:39.125 18:26:20 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 84406 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 84406 ']' 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 84406 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84406 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84406' 00:15:39.125 killing process with pid 84406 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@969 -- # kill 84406 00:15:39.125 18:26:20 ublk_recovery -- common/autotest_common.sh@974 -- # wait 84406 00:15:39.125 [2024-10-08 18:26:21.027246] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:39.125 [2024-10-08 18:26:21.027295] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:39.125 ************************************ 00:15:39.125 END TEST ublk_recovery 00:15:39.125 ************************************ 00:15:39.125 00:15:39.125 real 1m2.975s 00:15:39.125 user 1m37.112s 00:15:39.125 sys 0m38.592s 00:15:39.125 18:26:21 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:39.125 18:26:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:39.125 18:26:21 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:39.125 18:26:21 -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:15:39.125 18:26:21 -- common/autotest_common.sh@10 -- # set +x 00:15:39.125 18:26:21 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:39.125 18:26:21 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:39.125 18:26:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:39.125 18:26:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.125 18:26:21 -- common/autotest_common.sh@10 -- # set +x 00:15:39.125 ************************************ 00:15:39.125 START TEST ftl 00:15:39.125 ************************************ 00:15:39.125 18:26:21 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:39.125 * Looking for test storage... 
00:15:39.125 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:39.125 18:26:21 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:39.125 18:26:21 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:39.125 18:26:21 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:39.125 18:26:21 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:39.125 18:26:21 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.125 18:26:21 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.125 18:26:21 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.125 18:26:21 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.125 18:26:21 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.125 18:26:21 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.125 18:26:21 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.125 18:26:21 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.125 18:26:21 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.125 18:26:21 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.125 18:26:21 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.125 18:26:21 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:39.125 18:26:21 ftl -- scripts/common.sh@345 -- # : 1 00:15:39.125 18:26:21 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.125 18:26:21 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:39.125 18:26:21 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:39.125 18:26:21 ftl -- scripts/common.sh@353 -- # local d=1 00:15:39.125 18:26:21 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.125 18:26:21 ftl -- scripts/common.sh@355 -- # echo 1 00:15:39.125 18:26:21 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.125 18:26:21 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:39.125 18:26:21 ftl -- scripts/common.sh@353 -- # local d=2 00:15:39.125 18:26:21 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.125 18:26:21 ftl -- scripts/common.sh@355 -- # echo 2 00:15:39.125 18:26:21 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.125 18:26:21 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.125 18:26:21 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.125 18:26:21 ftl -- scripts/common.sh@368 -- # return 0 00:15:39.125 18:26:21 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.125 18:26:21 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:39.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.125 --rc genhtml_branch_coverage=1 00:15:39.125 --rc genhtml_function_coverage=1 00:15:39.125 --rc genhtml_legend=1 00:15:39.125 --rc geninfo_all_blocks=1 00:15:39.125 --rc geninfo_unexecuted_blocks=1 00:15:39.125 00:15:39.125 ' 00:15:39.125 18:26:21 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:39.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.125 --rc genhtml_branch_coverage=1 00:15:39.125 --rc genhtml_function_coverage=1 00:15:39.125 --rc genhtml_legend=1 00:15:39.125 --rc geninfo_all_blocks=1 00:15:39.125 --rc geninfo_unexecuted_blocks=1 00:15:39.125 00:15:39.125 ' 00:15:39.125 18:26:21 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:39.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:15:39.125 --rc genhtml_branch_coverage=1 00:15:39.125 --rc genhtml_function_coverage=1 00:15:39.125 --rc genhtml_legend=1 00:15:39.125 --rc geninfo_all_blocks=1 00:15:39.125 --rc geninfo_unexecuted_blocks=1 00:15:39.125 00:15:39.125 ' 00:15:39.125 18:26:21 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:39.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.125 --rc genhtml_branch_coverage=1 00:15:39.125 --rc genhtml_function_coverage=1 00:15:39.125 --rc genhtml_legend=1 00:15:39.125 --rc geninfo_all_blocks=1 00:15:39.125 --rc geninfo_unexecuted_blocks=1 00:15:39.125 00:15:39.125 ' 00:15:39.125 18:26:21 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:39.125 18:26:21 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:39.125 18:26:21 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:39.125 18:26:21 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:39.125 18:26:21 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:39.125 18:26:21 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:39.125 18:26:21 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.125 18:26:21 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:39.125 18:26:21 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:39.126 18:26:21 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:39.126 18:26:21 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:39.126 18:26:21 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:39.126 18:26:21 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:39.126 18:26:21 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:39.126 18:26:21 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:39.126 18:26:21 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:39.126 18:26:21 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:39.126 18:26:21 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:39.126 18:26:21 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:39.126 18:26:21 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:39.126 18:26:21 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:39.126 18:26:21 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:39.126 18:26:21 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:39.126 18:26:21 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:39.126 18:26:21 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:39.126 18:26:21 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 
00:15:39.126 18:26:21 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:39.126 18:26:21 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:39.126 18:26:21 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:39.126 18:26:21 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.126 18:26:21 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:39.126 18:26:21 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:39.126 18:26:21 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:39.126 18:26:21 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:39.126 18:26:21 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:39.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:39.126 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:39.126 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:39.126 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:39.126 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:39.126 18:26:22 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=85202 00:15:39.126 18:26:22 ftl -- ftl/ftl.sh@38 -- # waitforlisten 85202 00:15:39.126 18:26:22 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:39.126 18:26:22 ftl -- common/autotest_common.sh@831 -- # '[' -z 85202 ']' 00:15:39.126 18:26:22 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.126 18:26:22 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.126 18:26:22 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:39.126 18:26:22 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.126 18:26:22 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:39.126 [2024-10-08 18:26:22.193155] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:15:39.126 [2024-10-08 18:26:22.193273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85202 ] 00:15:39.126 [2024-10-08 18:26:22.322033] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:39.126 [2024-10-08 18:26:22.334568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.126 [2024-10-08 18:26:22.373486] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.126 18:26:23 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.126 18:26:23 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:39.126 18:26:23 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:39.126 18:26:23 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:39.126 18:26:23 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:39.126 18:26:23 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 
00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@50 -- # break 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@63 -- # break 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@66 -- # killprocess 85202 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@950 -- # '[' -z 85202 ']' 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@954 -- # kill -0 85202 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@955 -- # uname 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85202 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:39.126 killing process with pid 85202 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85202' 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@969 -- # kill 85202 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@974 -- # wait 85202 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:39.126 18:26:24 ftl -- ftl/ftl.sh@73 -- # run_test 
ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.126 18:26:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:39.126 ************************************ 00:15:39.126 START TEST ftl_fio_basic 00:15:39.126 ************************************ 00:15:39.126 18:26:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:39.126 * Looking for test storage... 00:15:39.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:39.126 18:26:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:39.126 18:26:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:39.126 18:26:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:39.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.126 --rc genhtml_branch_coverage=1 00:15:39.126 --rc genhtml_function_coverage=1 00:15:39.126 --rc genhtml_legend=1 00:15:39.126 --rc geninfo_all_blocks=1 00:15:39.126 --rc 
geninfo_unexecuted_blocks=1 00:15:39.126 00:15:39.126 ' 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:39.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.126 --rc genhtml_branch_coverage=1 00:15:39.126 --rc genhtml_function_coverage=1 00:15:39.126 --rc genhtml_legend=1 00:15:39.126 --rc geninfo_all_blocks=1 00:15:39.126 --rc geninfo_unexecuted_blocks=1 00:15:39.126 00:15:39.126 ' 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:39.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.126 --rc genhtml_branch_coverage=1 00:15:39.126 --rc genhtml_function_coverage=1 00:15:39.126 --rc genhtml_legend=1 00:15:39.126 --rc geninfo_all_blocks=1 00:15:39.126 --rc geninfo_unexecuted_blocks=1 00:15:39.126 00:15:39.126 ' 00:15:39.126 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:39.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.127 --rc genhtml_branch_coverage=1 00:15:39.127 --rc genhtml_function_coverage=1 00:15:39.127 --rc genhtml_legend=1 00:15:39.127 --rc geninfo_all_blocks=1 00:15:39.127 --rc geninfo_unexecuted_blocks=1 00:15:39.127 00:15:39.127 ' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # 
export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export 
FTL_BDEV_NAME=ftl0 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=85323 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 85323 00:15:39.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 85323 ']' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:39.127 [2024-10-08 18:26:25.161275] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:15:39.127 [2024-10-08 18:26:25.161450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85323 ] 00:15:39.127 [2024-10-08 18:26:25.299435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:39.127 [2024-10-08 18:26:25.316933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:39.127 [2024-10-08 18:26:25.361485] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.127 [2024-10-08 18:26:25.361835] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.127 [2024-10-08 18:26:25.361875] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:39.127 18:26:25 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:39.127 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:39.127 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:39.127 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:39.127 18:26:26 
ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:39.127 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:39.127 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:39.127 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:39.127 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:39.127 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:39.127 { 00:15:39.127 "name": "nvme0n1", 00:15:39.127 "aliases": [ 00:15:39.127 "c45918bb-e66b-4d7c-b14c-e937fa733231" 00:15:39.127 ], 00:15:39.127 "product_name": "NVMe disk", 00:15:39.127 "block_size": 4096, 00:15:39.127 "num_blocks": 1310720, 00:15:39.127 "uuid": "c45918bb-e66b-4d7c-b14c-e937fa733231", 00:15:39.127 "numa_id": -1, 00:15:39.127 "assigned_rate_limits": { 00:15:39.127 "rw_ios_per_sec": 0, 00:15:39.127 "rw_mbytes_per_sec": 0, 00:15:39.127 "r_mbytes_per_sec": 0, 00:15:39.127 "w_mbytes_per_sec": 0 00:15:39.127 }, 00:15:39.127 "claimed": false, 00:15:39.127 "zoned": false, 00:15:39.127 "supported_io_types": { 00:15:39.127 "read": true, 00:15:39.127 "write": true, 00:15:39.127 "unmap": true, 00:15:39.127 "flush": true, 00:15:39.127 "reset": true, 00:15:39.127 "nvme_admin": true, 00:15:39.127 "nvme_io": true, 00:15:39.127 "nvme_io_md": false, 00:15:39.127 "write_zeroes": true, 00:15:39.127 "zcopy": false, 00:15:39.127 "get_zone_info": false, 00:15:39.127 "zone_management": false, 00:15:39.127 "zone_append": false, 00:15:39.127 "compare": true, 00:15:39.127 "compare_and_write": false, 00:15:39.127 "abort": true, 00:15:39.127 "seek_hole": false, 00:15:39.127 "seek_data": false, 00:15:39.127 "copy": true, 00:15:39.127 "nvme_iov_md": false 00:15:39.127 }, 00:15:39.127 "driver_specific": { 00:15:39.127 "nvme": [ 00:15:39.127 { 00:15:39.127 "pci_address": "0000:00:11.0", 
00:15:39.127 "trid": { 00:15:39.127 "trtype": "PCIe", 00:15:39.127 "traddr": "0000:00:11.0" 00:15:39.127 }, 00:15:39.127 "ctrlr_data": { 00:15:39.127 "cntlid": 0, 00:15:39.127 "vendor_id": "0x1b36", 00:15:39.127 "model_number": "QEMU NVMe Ctrl", 00:15:39.127 "serial_number": "12341", 00:15:39.127 "firmware_revision": "8.0.0", 00:15:39.127 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:39.127 "oacs": { 00:15:39.127 "security": 0, 00:15:39.127 "format": 1, 00:15:39.127 "firmware": 0, 00:15:39.127 "ns_manage": 1 00:15:39.127 }, 00:15:39.127 "multi_ctrlr": false, 00:15:39.127 "ana_reporting": false 00:15:39.127 }, 00:15:39.127 "vs": { 00:15:39.127 "nvme_version": "1.4" 00:15:39.127 }, 00:15:39.127 "ns_data": { 00:15:39.127 "id": 1, 00:15:39.127 "can_share": false 00:15:39.127 } 00:15:39.127 } 00:15:39.127 ], 00:15:39.127 "mp_policy": "active_passive" 00:15:39.127 } 00:15:39.127 } 00:15:39.127 ]' 00:15:39.127 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:39.127 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 
00:15:39.128 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=8c0b71b0-841c-422f-9a3e-4f46ea08cf58 00:15:39.128 18:26:26 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8c0b71b0-841c-422f-9a3e-4f46ea08cf58 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:39.128 { 00:15:39.128 "name": "f3fe9078-a2ee-475f-89c8-dd0d0d849be4", 00:15:39.128 "aliases": [ 00:15:39.128 "lvs/nvme0n1p0" 00:15:39.128 ], 
00:15:39.128 "product_name": "Logical Volume", 00:15:39.128 "block_size": 4096, 00:15:39.128 "num_blocks": 26476544, 00:15:39.128 "uuid": "f3fe9078-a2ee-475f-89c8-dd0d0d849be4", 00:15:39.128 "assigned_rate_limits": { 00:15:39.128 "rw_ios_per_sec": 0, 00:15:39.128 "rw_mbytes_per_sec": 0, 00:15:39.128 "r_mbytes_per_sec": 0, 00:15:39.128 "w_mbytes_per_sec": 0 00:15:39.128 }, 00:15:39.128 "claimed": false, 00:15:39.128 "zoned": false, 00:15:39.128 "supported_io_types": { 00:15:39.128 "read": true, 00:15:39.128 "write": true, 00:15:39.128 "unmap": true, 00:15:39.128 "flush": false, 00:15:39.128 "reset": true, 00:15:39.128 "nvme_admin": false, 00:15:39.128 "nvme_io": false, 00:15:39.128 "nvme_io_md": false, 00:15:39.128 "write_zeroes": true, 00:15:39.128 "zcopy": false, 00:15:39.128 "get_zone_info": false, 00:15:39.128 "zone_management": false, 00:15:39.128 "zone_append": false, 00:15:39.128 "compare": false, 00:15:39.128 "compare_and_write": false, 00:15:39.128 "abort": false, 00:15:39.128 "seek_hole": true, 00:15:39.128 "seek_data": true, 00:15:39.128 "copy": false, 00:15:39.128 "nvme_iov_md": false 00:15:39.128 }, 00:15:39.128 "driver_specific": { 00:15:39.128 "lvol": { 00:15:39.128 "lvol_store_uuid": "8c0b71b0-841c-422f-9a3e-4f46ea08cf58", 00:15:39.128 "base_bdev": "nvme0n1", 00:15:39.128 "thin_provision": true, 00:15:39.128 "num_allocated_clusters": 0, 00:15:39.128 "snapshot": false, 00:15:39.128 "clone": false, 00:15:39.128 "esnap_clone": false 00:15:39.128 } 00:15:39.128 } 00:15:39.128 } 00:15:39.128 ]' 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # 
bdev_size=103424 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:39.128 { 00:15:39.128 "name": "f3fe9078-a2ee-475f-89c8-dd0d0d849be4", 00:15:39.128 "aliases": [ 00:15:39.128 "lvs/nvme0n1p0" 00:15:39.128 ], 00:15:39.128 "product_name": "Logical Volume", 00:15:39.128 "block_size": 4096, 00:15:39.128 "num_blocks": 26476544, 00:15:39.128 "uuid": "f3fe9078-a2ee-475f-89c8-dd0d0d849be4", 00:15:39.128 "assigned_rate_limits": { 00:15:39.128 "rw_ios_per_sec": 0, 00:15:39.128 "rw_mbytes_per_sec": 0, 00:15:39.128 "r_mbytes_per_sec": 0, 00:15:39.128 "w_mbytes_per_sec": 0 00:15:39.128 }, 00:15:39.128 "claimed": false, 00:15:39.128 "zoned": false, 00:15:39.128 
"supported_io_types": { 00:15:39.128 "read": true, 00:15:39.128 "write": true, 00:15:39.128 "unmap": true, 00:15:39.128 "flush": false, 00:15:39.128 "reset": true, 00:15:39.128 "nvme_admin": false, 00:15:39.128 "nvme_io": false, 00:15:39.128 "nvme_io_md": false, 00:15:39.128 "write_zeroes": true, 00:15:39.128 "zcopy": false, 00:15:39.128 "get_zone_info": false, 00:15:39.128 "zone_management": false, 00:15:39.128 "zone_append": false, 00:15:39.128 "compare": false, 00:15:39.128 "compare_and_write": false, 00:15:39.128 "abort": false, 00:15:39.128 "seek_hole": true, 00:15:39.128 "seek_data": true, 00:15:39.128 "copy": false, 00:15:39.128 "nvme_iov_md": false 00:15:39.128 }, 00:15:39.128 "driver_specific": { 00:15:39.128 "lvol": { 00:15:39.128 "lvol_store_uuid": "8c0b71b0-841c-422f-9a3e-4f46ea08cf58", 00:15:39.128 "base_bdev": "nvme0n1", 00:15:39.128 "thin_provision": true, 00:15:39.128 "num_allocated_clusters": 0, 00:15:39.128 "snapshot": false, 00:15:39.128 "clone": false, 00:15:39.128 "esnap_clone": false 00:15:39.128 } 00:15:39.128 } 00:15:39.128 } 00:15:39.128 ]' 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:39.128 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:39.386 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:39.386 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:39.386 18:26:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:39.387 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:39.387 18:26:27 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:39.387 18:26:28 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:39.387 18:26:28 ftl.ftl_fio_basic -- 
ftl/fio.sh@51 -- # l2p_percentage=60 00:15:39.387 18:26:28 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:39.387 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:39.387 18:26:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.387 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.387 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:39.387 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:39.387 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:39.387 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f3fe9078-a2ee-475f-89c8-dd0d0d849be4 00:15:39.646 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:39.646 { 00:15:39.646 "name": "f3fe9078-a2ee-475f-89c8-dd0d0d849be4", 00:15:39.646 "aliases": [ 00:15:39.646 "lvs/nvme0n1p0" 00:15:39.646 ], 00:15:39.646 "product_name": "Logical Volume", 00:15:39.646 "block_size": 4096, 00:15:39.646 "num_blocks": 26476544, 00:15:39.646 "uuid": "f3fe9078-a2ee-475f-89c8-dd0d0d849be4", 00:15:39.646 "assigned_rate_limits": { 00:15:39.646 "rw_ios_per_sec": 0, 00:15:39.646 "rw_mbytes_per_sec": 0, 00:15:39.646 "r_mbytes_per_sec": 0, 00:15:39.646 "w_mbytes_per_sec": 0 00:15:39.646 }, 00:15:39.646 "claimed": false, 00:15:39.646 "zoned": false, 00:15:39.646 "supported_io_types": { 00:15:39.646 "read": true, 00:15:39.646 "write": true, 00:15:39.646 "unmap": true, 00:15:39.646 "flush": false, 00:15:39.646 "reset": true, 00:15:39.646 "nvme_admin": false, 00:15:39.646 "nvme_io": false, 00:15:39.646 "nvme_io_md": false, 00:15:39.646 "write_zeroes": true, 00:15:39.646 "zcopy": false, 00:15:39.646 "get_zone_info": false, 00:15:39.646 "zone_management": false, 
00:15:39.646 "zone_append": false, 00:15:39.646 "compare": false, 00:15:39.646 "compare_and_write": false, 00:15:39.646 "abort": false, 00:15:39.646 "seek_hole": true, 00:15:39.646 "seek_data": true, 00:15:39.646 "copy": false, 00:15:39.646 "nvme_iov_md": false 00:15:39.646 }, 00:15:39.646 "driver_specific": { 00:15:39.646 "lvol": { 00:15:39.646 "lvol_store_uuid": "8c0b71b0-841c-422f-9a3e-4f46ea08cf58", 00:15:39.646 "base_bdev": "nvme0n1", 00:15:39.646 "thin_provision": true, 00:15:39.646 "num_allocated_clusters": 0, 00:15:39.646 "snapshot": false, 00:15:39.646 "clone": false, 00:15:39.646 "esnap_clone": false 00:15:39.646 } 00:15:39.646 } 00:15:39.646 } 00:15:39.646 ]' 00:15:39.646 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:39.646 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:39.646 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:39.646 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:39.646 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:39.646 18:26:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:39.646 18:26:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:39.646 18:26:28 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:39.646 18:26:28 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f3fe9078-a2ee-475f-89c8-dd0d0d849be4 -c nvc0n1p0 --l2p_dram_limit 60 00:15:39.906 [2024-10-08 18:26:28.637436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.906 [2024-10-08 18:26:28.637475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:39.906 [2024-10-08 18:26:28.637488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:39.906 [2024-10-08 18:26:28.637495] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.906 [2024-10-08 18:26:28.637557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.906 [2024-10-08 18:26:28.637567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:39.906 [2024-10-08 18:26:28.637576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:15:39.906 [2024-10-08 18:26:28.637582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.906 [2024-10-08 18:26:28.637621] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:39.906 [2024-10-08 18:26:28.637828] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:39.906 [2024-10-08 18:26:28.637851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.906 [2024-10-08 18:26:28.637858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:39.906 [2024-10-08 18:26:28.637866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:15:39.906 [2024-10-08 18:26:28.637873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.906 [2024-10-08 18:26:28.637920] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID faace634-33ce-47a9-8e37-6e5963d3d355 00:15:39.906 [2024-10-08 18:26:28.638915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.906 [2024-10-08 18:26:28.638945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:39.906 [2024-10-08 18:26:28.638954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:15:39.906 [2024-10-08 18:26:28.638961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.906 [2024-10-08 18:26:28.643689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:15:39.906 [2024-10-08 18:26:28.643721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:39.906 [2024-10-08 18:26:28.643729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.642 ms 00:15:39.906 [2024-10-08 18:26:28.643738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.906 [2024-10-08 18:26:28.643885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.906 [2024-10-08 18:26:28.643900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:39.906 [2024-10-08 18:26:28.643907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:15:39.906 [2024-10-08 18:26:28.643916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.906 [2024-10-08 18:26:28.643975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.906 [2024-10-08 18:26:28.643985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:39.907 [2024-10-08 18:26:28.643992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:39.907 [2024-10-08 18:26:28.643999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.907 [2024-10-08 18:26:28.644033] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:39.907 [2024-10-08 18:26:28.645291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.907 [2024-10-08 18:26:28.645328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:39.907 [2024-10-08 18:26:28.645337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.260 ms 00:15:39.907 [2024-10-08 18:26:28.645343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.907 [2024-10-08 18:26:28.645389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.907 [2024-10-08 18:26:28.645396] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:39.907 [2024-10-08 18:26:28.645406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:39.907 [2024-10-08 18:26:28.645413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.907 [2024-10-08 18:26:28.645438] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:39.907 [2024-10-08 18:26:28.645572] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:39.907 [2024-10-08 18:26:28.645602] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:39.907 [2024-10-08 18:26:28.645612] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:39.907 [2024-10-08 18:26:28.645624] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:39.907 [2024-10-08 18:26:28.645631] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:39.907 [2024-10-08 18:26:28.645638] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:39.907 [2024-10-08 18:26:28.645647] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:39.907 [2024-10-08 18:26:28.645654] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:39.907 [2024-10-08 18:26:28.645660] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:39.907 [2024-10-08 18:26:28.645668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.907 [2024-10-08 18:26:28.645673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:39.907 [2024-10-08 18:26:28.645681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 
00:15:39.907 [2024-10-08 18:26:28.645687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.907 [2024-10-08 18:26:28.645775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.907 [2024-10-08 18:26:28.645783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:39.907 [2024-10-08 18:26:28.645791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:15:39.907 [2024-10-08 18:26:28.645806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.907 [2024-10-08 18:26:28.645906] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:39.907 [2024-10-08 18:26:28.645917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:39.907 [2024-10-08 18:26:28.645925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:39.907 [2024-10-08 18:26:28.645931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.907 [2024-10-08 18:26:28.645938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:39.907 [2024-10-08 18:26:28.645945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:39.907 [2024-10-08 18:26:28.645953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:39.907 [2024-10-08 18:26:28.645960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:39.907 [2024-10-08 18:26:28.645968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:39.907 [2024-10-08 18:26:28.645973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:39.907 [2024-10-08 18:26:28.645983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:39.907 [2024-10-08 18:26:28.645989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:39.907 [2024-10-08 18:26:28.645998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 
MiB 00:15:39.907 [2024-10-08 18:26:28.646004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:39.907 [2024-10-08 18:26:28.646012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:39.907 [2024-10-08 18:26:28.646018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.907 [2024-10-08 18:26:28.646025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:39.907 [2024-10-08 18:26:28.646031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:39.907 [2024-10-08 18:26:28.646038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.907 [2024-10-08 18:26:28.646045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:39.907 [2024-10-08 18:26:28.646052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:39.907 [2024-10-08 18:26:28.646058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.907 [2024-10-08 18:26:28.646065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:39.907 [2024-10-08 18:26:28.646071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:39.907 [2024-10-08 18:26:28.646078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.907 [2024-10-08 18:26:28.646094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:39.907 [2024-10-08 18:26:28.646100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:39.907 [2024-10-08 18:26:28.646106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:39.907 [2024-10-08 18:26:28.646115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:39.907 [2024-10-08 18:26:28.646121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:39.907 [2024-10-08 18:26:28.646128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:15:39.907 [2024-10-08 18:26:28.646134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:39.907 [2024-10-08 18:26:28.646140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:39.907 [2024-10-08 18:26:28.646147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:39.907 [2024-10-08 18:26:28.646154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:39.907 [2024-10-08 18:26:28.646159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:39.907 [2024-10-08 18:26:28.646168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:39.907 [2024-10-08 18:26:28.646173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:39.907 [2024-10-08 18:26:28.646180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:39.907 [2024-10-08 18:26:28.646186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.907 [2024-10-08 18:26:28.646193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:39.907 [2024-10-08 18:26:28.646198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:39.907 [2024-10-08 18:26:28.646207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.907 [2024-10-08 18:26:28.646214] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:39.907 [2024-10-08 18:26:28.646223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:39.907 [2024-10-08 18:26:28.646230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:39.907 [2024-10-08 18:26:28.646237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:39.907 [2024-10-08 18:26:28.646244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:39.907 [2024-10-08 18:26:28.646251] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:39.907 [2024-10-08 18:26:28.646257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:39.907 [2024-10-08 18:26:28.646265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:39.907 [2024-10-08 18:26:28.646271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:39.907 [2024-10-08 18:26:28.646278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:39.907 [2024-10-08 18:26:28.646287] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:39.907 [2024-10-08 18:26:28.646298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:39.907 [2024-10-08 18:26:28.646314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:39.907 [2024-10-08 18:26:28.646321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:39.907 [2024-10-08 18:26:28.646327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:39.907 [2024-10-08 18:26:28.646334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:39.907 [2024-10-08 18:26:28.646339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:39.907 [2024-10-08 18:26:28.646347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:39.907 [2024-10-08 18:26:28.646352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:39.907 [2024-10-08 18:26:28.646359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:39.907 [2024-10-08 18:26:28.646365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:39.907 [2024-10-08 18:26:28.646372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:39.907 [2024-10-08 18:26:28.646377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:39.907 [2024-10-08 18:26:28.646384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:39.907 [2024-10-08 18:26:28.646390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:39.907 [2024-10-08 18:26:28.646396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:39.907 [2024-10-08 18:26:28.646402] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:39.907 [2024-10-08 18:26:28.646409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:39.907 [2024-10-08 18:26:28.646416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:39.908 [2024-10-08 18:26:28.646423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:39.908 [2024-10-08 18:26:28.646429] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:39.908 [2024-10-08 18:26:28.646437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:39.908 [2024-10-08 18:26:28.646443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:39.908 [2024-10-08 18:26:28.646451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:39.908 [2024-10-08 18:26:28.646456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:15:39.908 [2024-10-08 18:26:28.646463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:39.908 [2024-10-08 18:26:28.646536] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:15:39.908 [2024-10-08 18:26:28.646545] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:42.443 [2024-10-08 18:26:30.766272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.766344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:42.443 [2024-10-08 18:26:30.766359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2119.729 ms 00:15:42.443 [2024-10-08 18:26:30.766369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.784052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.784103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:42.443 [2024-10-08 18:26:30.784115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.592 ms 00:15:42.443 [2024-10-08 18:26:30.784127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 
18:26:30.784228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.784240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:42.443 [2024-10-08 18:26:30.784262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:15:42.443 [2024-10-08 18:26:30.784271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.793209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.793258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:42.443 [2024-10-08 18:26:30.793270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.868 ms 00:15:42.443 [2024-10-08 18:26:30.793285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.793338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.793351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:42.443 [2024-10-08 18:26:30.793371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:42.443 [2024-10-08 18:26:30.793383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.793765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.793800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:42.443 [2024-10-08 18:26:30.793810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:15:42.443 [2024-10-08 18:26:30.793824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.793973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.793989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
bands metadata 00:15:42.443 [2024-10-08 18:26:30.794000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:15:42.443 [2024-10-08 18:26:30.794012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.799863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.799900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:42.443 [2024-10-08 18:26:30.799911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.818 ms 00:15:42.443 [2024-10-08 18:26:30.799922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.808347] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:42.443 [2024-10-08 18:26:30.822622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.822653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:42.443 [2024-10-08 18:26:30.822666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.596 ms 00:15:42.443 [2024-10-08 18:26:30.822674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.859351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.859386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:42.443 [2024-10-08 18:26:30.859403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.644 ms 00:15:42.443 [2024-10-08 18:26:30.859410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.859591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.859602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:42.443 [2024-10-08 18:26:30.859615] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:15:42.443 [2024-10-08 18:26:30.859622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.863061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.443 [2024-10-08 18:26:30.863094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:42.443 [2024-10-08 18:26:30.863108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.386 ms 00:15:42.443 [2024-10-08 18:26:30.863115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.443 [2024-10-08 18:26:30.865730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.444 [2024-10-08 18:26:30.865773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:42.444 [2024-10-08 18:26:30.865785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.550 ms 00:15:42.444 [2024-10-08 18:26:30.865793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.444 [2024-10-08 18:26:30.866087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.444 [2024-10-08 18:26:30.866123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:42.444 [2024-10-08 18:26:30.866136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:15:42.444 [2024-10-08 18:26:30.866144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.444 [2024-10-08 18:26:30.888398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.444 [2024-10-08 18:26:30.888440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:42.444 [2024-10-08 18:26:30.888455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.219 ms 00:15:42.444 [2024-10-08 18:26:30.888463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:15:42.444 [2024-10-08 18:26:30.892501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.444 [2024-10-08 18:26:30.892543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:42.444 [2024-10-08 18:26:30.892563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.975 ms 00:15:42.444 [2024-10-08 18:26:30.892571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.444 [2024-10-08 18:26:30.896317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.444 [2024-10-08 18:26:30.896348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:42.444 [2024-10-08 18:26:30.896359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:15:42.444 [2024-10-08 18:26:30.896366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.444 [2024-10-08 18:26:30.900259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.444 [2024-10-08 18:26:30.900289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:42.444 [2024-10-08 18:26:30.900303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.848 ms 00:15:42.444 [2024-10-08 18:26:30.900310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.444 [2024-10-08 18:26:30.900364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.444 [2024-10-08 18:26:30.900374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:42.444 [2024-10-08 18:26:30.900385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:42.444 [2024-10-08 18:26:30.900392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.444 [2024-10-08 18:26:30.900472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.444 [2024-10-08 18:26:30.900482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finalize initialization 00:15:42.444 [2024-10-08 18:26:30.900491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:15:42.444 [2024-10-08 18:26:30.900501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.444 [2024-10-08 18:26:30.901420] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2263.570 ms, result 0 00:15:42.444 { 00:15:42.444 "name": "ftl0", 00:15:42.444 "uuid": "faace634-33ce-47a9-8e37-6e5963d3d355" 00:15:42.444 } 00:15:42.444 18:26:30 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:42.444 18:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:42.444 18:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:42.444 18:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:42.444 18:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:42.444 18:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:42.444 18:26:30 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:42.444 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:42.703 [ 00:15:42.703 { 00:15:42.703 "name": "ftl0", 00:15:42.703 "aliases": [ 00:15:42.703 "faace634-33ce-47a9-8e37-6e5963d3d355" 00:15:42.703 ], 00:15:42.703 "product_name": "FTL disk", 00:15:42.703 "block_size": 4096, 00:15:42.703 "num_blocks": 20971520, 00:15:42.703 "uuid": "faace634-33ce-47a9-8e37-6e5963d3d355", 00:15:42.703 "assigned_rate_limits": { 00:15:42.703 "rw_ios_per_sec": 0, 00:15:42.703 "rw_mbytes_per_sec": 0, 00:15:42.703 "r_mbytes_per_sec": 0, 00:15:42.703 "w_mbytes_per_sec": 0 00:15:42.703 }, 00:15:42.703 "claimed": false, 00:15:42.703 "zoned": false, 
00:15:42.703 "supported_io_types": { 00:15:42.703 "read": true, 00:15:42.703 "write": true, 00:15:42.703 "unmap": true, 00:15:42.703 "flush": true, 00:15:42.703 "reset": false, 00:15:42.703 "nvme_admin": false, 00:15:42.703 "nvme_io": false, 00:15:42.703 "nvme_io_md": false, 00:15:42.703 "write_zeroes": true, 00:15:42.703 "zcopy": false, 00:15:42.703 "get_zone_info": false, 00:15:42.703 "zone_management": false, 00:15:42.703 "zone_append": false, 00:15:42.703 "compare": false, 00:15:42.703 "compare_and_write": false, 00:15:42.703 "abort": false, 00:15:42.703 "seek_hole": false, 00:15:42.703 "seek_data": false, 00:15:42.703 "copy": false, 00:15:42.703 "nvme_iov_md": false 00:15:42.703 }, 00:15:42.703 "driver_specific": { 00:15:42.703 "ftl": { 00:15:42.703 "base_bdev": "f3fe9078-a2ee-475f-89c8-dd0d0d849be4", 00:15:42.703 "cache": "nvc0n1p0" 00:15:42.703 } 00:15:42.703 } 00:15:42.703 } 00:15:42.703 ] 00:15:42.703 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:42.703 18:26:31 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:42.703 18:26:31 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:42.703 18:26:31 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:42.703 18:26:31 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:42.964 [2024-10-08 18:26:31.701243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.701280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:42.964 [2024-10-08 18:26:31.701290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:42.964 [2024-10-08 18:26:31.701298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.701331] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:15:42.964 [2024-10-08 18:26:31.701772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.701794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:42.964 [2024-10-08 18:26:31.701805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:15:42.964 [2024-10-08 18:26:31.701811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.702252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.702271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:42.964 [2024-10-08 18:26:31.702280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:15:42.964 [2024-10-08 18:26:31.702296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.704709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.704728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:42.964 [2024-10-08 18:26:31.704738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.382 ms 00:15:42.964 [2024-10-08 18:26:31.704745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.709346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.709371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:42.964 [2024-10-08 18:26:31.709390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.569 ms 00:15:42.964 [2024-10-08 18:26:31.709397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.711045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.711076] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:42.964 [2024-10-08 18:26:31.711087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.565 ms 00:15:42.964 [2024-10-08 18:26:31.711092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.714689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.714721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:42.964 [2024-10-08 18:26:31.714730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.554 ms 00:15:42.964 [2024-10-08 18:26:31.714738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.714921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.714935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:42.964 [2024-10-08 18:26:31.714944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:15:42.964 [2024-10-08 18:26:31.714950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.716131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.716159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:15:42.964 [2024-10-08 18:26:31.716167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:15:42.964 [2024-10-08 18:26:31.716173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.717214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.717240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:15:42.964 [2024-10-08 18:26:31.717249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:15:42.964 
[2024-10-08 18:26:31.717254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.718118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.718145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:42.964 [2024-10-08 18:26:31.718155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:15:42.964 [2024-10-08 18:26:31.718161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.719083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.964 [2024-10-08 18:26:31.719110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:42.964 [2024-10-08 18:26:31.719120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.833 ms 00:15:42.964 [2024-10-08 18:26:31.719126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.964 [2024-10-08 18:26:31.719157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:42.964 [2024-10-08 18:26:31.719168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 
18:26:31.719212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 
18:26:31.719305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:42.964 [2024-10-08 18:26:31.719365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 
[2024-10-08 18:26:31.719402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:15:42.965 [2024-10-08 18:26:31.719492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: 
free 00:15:42.965 [2024-10-08 18:26:31.719583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:15:42.965 [2024-10-08 18:26:31.719679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 
0 state: free 00:15:42.965 [2024-10-08 18:26:31.719785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:42.965 [2024-10-08 18:26:31.719871] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:42.965 [2024-10-08 18:26:31.719879] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: faace634-33ce-47a9-8e37-6e5963d3d355 00:15:42.965 [2024-10-08 18:26:31.719885] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:42.965 [2024-10-08 18:26:31.719893] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:42.965 [2024-10-08 18:26:31.719900] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:42.965 [2024-10-08 18:26:31.719907] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:42.965 [2024-10-08 18:26:31.719912] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:42.965 [2024-10-08 18:26:31.719919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:42.965 [2024-10-08 18:26:31.719924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:42.965 [2024-10-08 18:26:31.719930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:42.965 [2024-10-08 18:26:31.719935] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:42.965 [2024-10-08 18:26:31.719942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.965 [2024-10-08 18:26:31.719948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:42.965 [2024-10-08 18:26:31.719956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:15:42.965 [2024-10-08 18:26:31.719962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.965 [2024-10-08 18:26:31.721296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.965 [2024-10-08 18:26:31.721318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:42.965 [2024-10-08 18:26:31.721327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.300 ms 00:15:42.966 [2024-10-08 18:26:31.721334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.721431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:42.966 [2024-10-08 18:26:31.721444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:42.966 [2024-10-08 18:26:31.721452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:42.966 [2024-10-08 
18:26:31.721458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.726055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.726083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:42.966 [2024-10-08 18:26:31.726092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.726108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.726156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.726164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:42.966 [2024-10-08 18:26:31.726172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.726178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.726247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.726259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:42.966 [2024-10-08 18:26:31.726267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.726273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.726302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.726312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:42.966 [2024-10-08 18:26:31.726320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.726326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.734501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:15:42.966 [2024-10-08 18:26:31.734533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:42.966 [2024-10-08 18:26:31.734543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.734549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.741458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.741491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:42.966 [2024-10-08 18:26:31.741511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.741518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.741591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.741602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:42.966 [2024-10-08 18:26:31.741611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.741617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.741665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.741673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:42.966 [2024-10-08 18:26:31.741680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.741687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.741765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.741781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:42.966 [2024-10-08 18:26:31.741791] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.741797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.741839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.741847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:42.966 [2024-10-08 18:26:31.741855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.741861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.741903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.741910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:42.966 [2024-10-08 18:26:31.741919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.741926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.741980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:42.966 [2024-10-08 18:26:31.741992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:42.966 [2024-10-08 18:26:31.741999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:42.966 [2024-10-08 18:26:31.742005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:42.966 [2024-10-08 18:26:31.742162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 40.895 ms, result 0 00:15:42.966 true 00:15:42.966 18:26:31 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 85323 00:15:42.966 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 85323 ']' 00:15:42.966 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 85323 00:15:42.966 18:26:31 
ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:15:42.966 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.966 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85323 00:15:42.966 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:42.966 killing process with pid 85323 00:15:42.966 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:42.966 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85323' 00:15:42.966 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 85323 00:15:42.966 18:26:31 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 85323 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:48.249 18:26:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:48.249 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:48.249 fio-3.35 00:15:48.249 Starting 1 thread 00:15:52.455 00:15:52.455 test: (groupid=0, jobs=1): err= 0: pid=85475: Tue Oct 8 18:26:40 2024 00:15:52.455 read: IOPS=994, BW=66.1MiB/s (69.3MB/s)(255MiB/3853msec) 00:15:52.455 slat (nsec): min=3975, max=19658, avg=5375.15, stdev=1807.58 00:15:52.455 clat (usec): min=259, max=1198, avg=455.00, stdev=149.01 00:15:52.455 lat (usec): min=263, max=1203, avg=460.38, stdev=149.17 00:15:52.455 clat percentiles (usec): 00:15:52.455 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 330], 
00:15:52.455 | 30.00th=[ 343], 40.00th=[ 396], 50.00th=[ 412], 60.00th=[ 461], 00:15:52.455 | 70.00th=[ 494], 80.00th=[ 545], 90.00th=[ 676], 95.00th=[ 807], 00:15:52.455 | 99.00th=[ 898], 99.50th=[ 979], 99.90th=[ 1139], 99.95th=[ 1156], 00:15:52.455 | 99.99th=[ 1205] 00:15:52.455 write: IOPS=1001, BW=66.5MiB/s (69.8MB/s)(256MiB/3849msec); 0 zone resets 00:15:52.455 slat (usec): min=14, max=101, avg=18.74, stdev= 3.53 00:15:52.455 clat (usec): min=300, max=1300, avg=510.18, stdev=169.99 00:15:52.455 lat (usec): min=321, max=1318, avg=528.92, stdev=170.35 00:15:52.455 clat percentiles (usec): 00:15:52.455 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 351], 20.00th=[ 359], 00:15:52.455 | 30.00th=[ 383], 40.00th=[ 445], 50.00th=[ 490], 60.00th=[ 498], 00:15:52.455 | 70.00th=[ 553], 80.00th=[ 619], 90.00th=[ 783], 95.00th=[ 889], 00:15:52.455 | 99.00th=[ 1029], 99.50th=[ 1123], 99.90th=[ 1254], 99.95th=[ 1287], 00:15:52.455 | 99.99th=[ 1303] 00:15:52.455 bw ( KiB/s): min=51680, max=77384, per=98.68%, avg=67222.86, stdev=8240.72, samples=7 00:15:52.455 iops : min= 760, max= 1138, avg=988.57, stdev=121.19, samples=7 00:15:52.455 lat (usec) : 500=65.94%, 750=24.29%, 1000=8.99% 00:15:52.455 lat (msec) : 2=0.78% 00:15:52.455 cpu : usr=99.30%, sys=0.08%, ctx=6, majf=0, minf=1181 00:15:52.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:52.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.455 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:52.455 00:15:52.455 Run status group 0 (all jobs): 00:15:52.455 READ: bw=66.1MiB/s (69.3MB/s), 66.1MiB/s-66.1MiB/s (69.3MB/s-69.3MB/s), io=255MiB (267MB), run=3853-3853msec 00:15:52.455 WRITE: bw=66.5MiB/s (69.8MB/s), 66.5MiB/s-66.5MiB/s (69.8MB/s-69.8MB/s), io=256MiB (269MB), run=3849-3849msec 
00:15:52.716 ----------------------------------------------------- 00:15:52.717 Suppressions used: 00:15:52.717 count bytes template 00:15:52.717 1 5 /usr/src/fio/parse.c 00:15:52.717 1 8 libtcmalloc_minimal.so 00:15:52.717 1 904 libcrypto.so 00:15:52.717 ----------------------------------------------------- 00:15:52.717 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:52.717 18:26:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:52.982 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:52.982 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:52.982 fio-3.35 00:15:52.982 Starting 2 threads 00:16:19.554 00:16:19.555 first_half: (groupid=0, jobs=1): err= 0: pid=85567: Tue Oct 8 18:27:06 2024 00:16:19.555 read: IOPS=2735, BW=10.7MiB/s (11.2MB/s)(255MiB/23870msec) 00:16:19.555 slat (nsec): min=3078, max=47726, avg=4282.97, stdev=1390.23 00:16:19.555 clat (usec): min=679, max=354008, avg=36796.50, stdev=20953.35 00:16:19.555 lat (usec): min=684, max=354012, avg=36800.79, stdev=20953.49 00:16:19.555 clat percentiles (msec): 00:16:19.555 | 1.00th=[ 14], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 30], 00:16:19.555 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:16:19.555 | 70.00th=[ 35], 80.00th=[ 38], 90.00th=[ 43], 
95.00th=[ 67], 00:16:19.555 | 99.00th=[ 146], 99.50th=[ 180], 99.90th=[ 249], 99.95th=[ 296], 00:16:19.555 | 99.99th=[ 342] 00:16:19.555 write: IOPS=3265, BW=12.8MiB/s (13.4MB/s)(256MiB/20072msec); 0 zone resets 00:16:19.555 slat (usec): min=3, max=1090, avg= 6.21, stdev= 9.58 00:16:19.555 clat (usec): min=340, max=90792, avg=9931.29, stdev=15120.63 00:16:19.555 lat (usec): min=346, max=90798, avg=9937.50, stdev=15120.81 00:16:19.555 clat percentiles (usec): 00:16:19.555 | 1.00th=[ 775], 5.00th=[ 1074], 10.00th=[ 1254], 20.00th=[ 1795], 00:16:19.555 | 30.00th=[ 3228], 40.00th=[ 4555], 50.00th=[ 5407], 60.00th=[ 6718], 00:16:19.555 | 70.00th=[ 9110], 80.00th=[11600], 90.00th=[16057], 95.00th=[43779], 00:16:19.555 | 99.00th=[74974], 99.50th=[79168], 99.90th=[86508], 99.95th=[87557], 00:16:19.555 | 99.99th=[89654] 00:16:19.555 bw ( KiB/s): min= 1664, max=40072, per=99.67%, avg=23831.27, stdev=12932.83, samples=22 00:16:19.555 iops : min= 416, max=10018, avg=5957.82, stdev=3233.21, samples=22 00:16:19.555 lat (usec) : 500=0.03%, 750=0.38%, 1000=1.42% 00:16:19.555 lat (msec) : 2=9.27%, 4=7.18%, 10=19.16%, 20=9.61%, 50=46.84% 00:16:19.555 lat (msec) : 100=5.02%, 250=1.03%, 500=0.04% 00:16:19.555 cpu : usr=99.34%, sys=0.15%, ctx=53, majf=0, minf=5587 00:16:19.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:19.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.555 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.555 issued rwts: total=65308,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.555 second_half: (groupid=0, jobs=1): err= 0: pid=85568: Tue Oct 8 18:27:06 2024 00:16:19.555 read: IOPS=2716, BW=10.6MiB/s (11.1MB/s)(255MiB/24046msec) 00:16:19.555 slat (nsec): min=3046, max=52144, avg=4541.81, stdev=1895.12 00:16:19.555 clat (usec): min=660, max=354667, avg=36229.46, stdev=23738.08 00:16:19.555 
lat (usec): min=667, max=354679, avg=36234.00, stdev=23738.20 00:16:19.555 clat percentiles (msec): 00:16:19.555 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 30], 00:16:19.555 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 33], 00:16:19.555 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 42], 95.00th=[ 55], 00:16:19.555 | 99.00th=[ 171], 99.50th=[ 201], 99.90th=[ 222], 99.95th=[ 243], 00:16:19.555 | 99.99th=[ 351] 00:16:19.555 write: IOPS=2988, BW=11.7MiB/s (12.2MB/s)(256MiB/21928msec); 0 zone resets 00:16:19.555 slat (usec): min=3, max=1707, avg= 6.59, stdev= 8.30 00:16:19.555 clat (usec): min=377, max=90762, avg=10839.35, stdev=16822.95 00:16:19.555 lat (usec): min=384, max=90769, avg=10845.94, stdev=16823.34 00:16:19.555 clat percentiles (usec): 00:16:19.555 | 1.00th=[ 758], 5.00th=[ 1029], 10.00th=[ 1188], 20.00th=[ 1450], 00:16:19.555 | 30.00th=[ 2057], 40.00th=[ 3130], 50.00th=[ 4555], 60.00th=[ 6194], 00:16:19.555 | 70.00th=[ 8979], 80.00th=[13698], 90.00th=[30278], 95.00th=[59507], 00:16:19.555 | 99.00th=[76022], 99.50th=[80217], 99.90th=[86508], 99.95th=[88605], 00:16:19.555 | 99.99th=[89654] 00:16:19.555 bw ( KiB/s): min= 872, max=55520, per=87.72%, avg=20974.20, stdev=14279.68, samples=25 00:16:19.555 iops : min= 218, max=13880, avg=5243.52, stdev=3569.89, samples=25 00:16:19.555 lat (usec) : 500=0.03%, 750=0.43%, 1000=1.72% 00:16:19.555 lat (msec) : 2=12.58%, 4=8.82%, 10=14.21%, 20=8.08%, 50=48.46% 00:16:19.555 lat (msec) : 100=4.28%, 250=1.38%, 500=0.02% 00:16:19.555 cpu : usr=99.30%, sys=0.10%, ctx=38, majf=0, minf=5549 00:16:19.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:19.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.555 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.555 issued rwts: total=65314,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.555 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.555 
00:16:19.555 Run status group 0 (all jobs): 00:16:19.555 READ: bw=21.2MiB/s (22.2MB/s), 10.6MiB/s-10.7MiB/s (11.1MB/s-11.2MB/s), io=510MiB (535MB), run=23870-24046msec 00:16:19.555 WRITE: bw=23.3MiB/s (24.5MB/s), 11.7MiB/s-12.8MiB/s (12.2MB/s-13.4MB/s), io=512MiB (537MB), run=20072-21928msec 00:16:19.555 ----------------------------------------------------- 00:16:19.555 Suppressions used: 00:16:19.555 count bytes template 00:16:19.555 2 10 /usr/src/fio/parse.c 00:16:19.555 4 384 /usr/src/fio/iolog.c 00:16:19.555 1 8 libtcmalloc_minimal.so 00:16:19.555 1 904 libcrypto.so 00:16:19.555 ----------------------------------------------------- 00:16:19.555 00:16:19.555 18:27:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:19.555 18:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:19.555 18:27:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:19.555 18:27:08 
ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:19.555 18:27:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:19.555 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:19.555 fio-3.35 00:16:19.555 Starting 1 thread 00:16:34.484 00:16:34.484 test: (groupid=0, jobs=1): err= 0: pid=85880: Tue Oct 8 18:27:22 2024 00:16:34.484 read: IOPS=8106, BW=31.7MiB/s (33.2MB/s)(255MiB/8043msec) 00:16:34.484 slat (nsec): min=3020, max=17208, avg=3553.04, stdev=696.94 00:16:34.484 clat (usec): min=483, max=35000, avg=15781.19, stdev=1605.22 00:16:34.484 lat (usec): min=486, max=35004, avg=15784.75, stdev=1605.23 00:16:34.484 clat percentiles (usec): 00:16:34.484 | 
1.00th=[14222], 5.00th=[14615], 10.00th=[14746], 20.00th=[15008], 00:16:34.484 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15401], 60.00th=[15664], 00:16:34.484 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16909], 95.00th=[18482], 00:16:34.484 | 99.00th=[23200], 99.50th=[24249], 99.90th=[27395], 99.95th=[31065], 00:16:34.484 | 99.99th=[34341] 00:16:34.484 write: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(256MiB/5686msec); 0 zone resets 00:16:34.484 slat (usec): min=4, max=684, avg= 6.49, stdev= 4.50 00:16:34.484 clat (usec): min=509, max=48053, avg=11059.22, stdev=10614.41 00:16:34.484 lat (usec): min=515, max=48059, avg=11065.71, stdev=10614.62 00:16:34.484 clat percentiles (usec): 00:16:34.484 | 1.00th=[ 668], 5.00th=[ 807], 10.00th=[ 906], 20.00th=[ 1037], 00:16:34.484 | 30.00th=[ 1172], 40.00th=[ 1598], 50.00th=[10290], 60.00th=[12649], 00:16:34.484 | 70.00th=[15401], 80.00th=[17695], 90.00th=[30016], 95.00th=[31589], 00:16:34.484 | 99.00th=[35390], 99.50th=[37487], 99.90th=[41681], 99.95th=[42730], 00:16:34.484 | 99.99th=[45876] 00:16:34.484 bw ( KiB/s): min=19272, max=56448, per=94.77%, avg=43690.67, stdev=12287.28, samples=12 00:16:34.484 iops : min= 4818, max=14112, avg=10922.67, stdev=3071.82, samples=12 00:16:34.484 lat (usec) : 500=0.01%, 750=1.48%, 1000=7.02% 00:16:34.484 lat (msec) : 2=11.99%, 4=0.53%, 10=3.66%, 20=65.28%, 50=10.03% 00:16:34.484 cpu : usr=99.03%, sys=0.20%, ctx=29, majf=0, minf=5577 00:16:34.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:34.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.484 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.484 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.484 00:16:34.484 Run status group 0 (all jobs): 00:16:34.484 READ: bw=31.7MiB/s (33.2MB/s), 31.7MiB/s-31.7MiB/s (33.2MB/s-33.2MB/s), io=255MiB (267MB), 
run=8043-8043msec 00:16:34.484 WRITE: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=256MiB (268MB), run=5686-5686msec 00:16:35.059 ----------------------------------------------------- 00:16:35.059 Suppressions used: 00:16:35.059 count bytes template 00:16:35.059 1 5 /usr/src/fio/parse.c 00:16:35.059 2 192 /usr/src/fio/iolog.c 00:16:35.059 1 8 libtcmalloc_minimal.so 00:16:35.059 1 904 libcrypto.so 00:16:35.059 ----------------------------------------------------- 00:16:35.059 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:35.059 Remove shared memory files 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70790 /dev/shm/spdk_tgt_trace.pid84266 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:35.059 00:16:35.059 real 0m59.001s 00:16:35.059 user 2m11.723s 00:16:35.059 sys 0m2.848s 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:35.059 18:27:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:35.059 ************************************ 00:16:35.059 END TEST ftl_fio_basic 00:16:35.059 ************************************ 
00:16:35.322 18:27:23 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:35.322 18:27:23 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:35.322 18:27:23 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:35.322 18:27:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:35.322 ************************************ 00:16:35.322 START TEST ftl_bdevperf 00:16:35.322 ************************************ 00:16:35.322 18:27:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:35.322 * Looking for test storage... 00:16:35.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.322 
18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:35.322 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:35.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.323 --rc genhtml_branch_coverage=1 00:16:35.323 --rc genhtml_function_coverage=1 00:16:35.323 --rc genhtml_legend=1 00:16:35.323 --rc geninfo_all_blocks=1 00:16:35.323 --rc 
geninfo_unexecuted_blocks=1 00:16:35.323 00:16:35.323 ' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:35.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.323 --rc genhtml_branch_coverage=1 00:16:35.323 --rc genhtml_function_coverage=1 00:16:35.323 --rc genhtml_legend=1 00:16:35.323 --rc geninfo_all_blocks=1 00:16:35.323 --rc geninfo_unexecuted_blocks=1 00:16:35.323 00:16:35.323 ' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:35.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.323 --rc genhtml_branch_coverage=1 00:16:35.323 --rc genhtml_function_coverage=1 00:16:35.323 --rc genhtml_legend=1 00:16:35.323 --rc geninfo_all_blocks=1 00:16:35.323 --rc geninfo_unexecuted_blocks=1 00:16:35.323 00:16:35.323 ' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:35.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.323 --rc genhtml_branch_coverage=1 00:16:35.323 --rc genhtml_function_coverage=1 00:16:35.323 --rc genhtml_legend=1 00:16:35.323 --rc geninfo_all_blocks=1 00:16:35.323 --rc geninfo_unexecuted_blocks=1 00:16:35.323 00:16:35.323 ' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=86114 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 86114 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 86114 ']' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:35.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:35.323 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:35.323 18:27:24 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:35.586 [2024-10-08 18:27:24.232989] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:16:35.586 [2024-10-08 18:27:24.233444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86114 ] 00:16:35.586 [2024-10-08 18:27:24.369300] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:35.586 [2024-10-08 18:27:24.383191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.847 [2024-10-08 18:27:24.458871] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.418 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:36.418 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:36.418 18:27:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:36.418 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:36.418 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:36.418 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:36.418 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:36.418 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:36.679 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:36.679 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:36.679 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:36.679 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:36.679 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:36.679 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:36.679 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:36.679 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:36.940 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:36.940 { 00:16:36.940 "name": "nvme0n1", 00:16:36.940 "aliases": [ 00:16:36.940 "e1fcf2cc-6591-4953-adb2-65331f3a7214" 00:16:36.940 ], 
00:16:36.940 "product_name": "NVMe disk", 00:16:36.940 "block_size": 4096, 00:16:36.940 "num_blocks": 1310720, 00:16:36.940 "uuid": "e1fcf2cc-6591-4953-adb2-65331f3a7214", 00:16:36.940 "numa_id": -1, 00:16:36.940 "assigned_rate_limits": { 00:16:36.940 "rw_ios_per_sec": 0, 00:16:36.940 "rw_mbytes_per_sec": 0, 00:16:36.940 "r_mbytes_per_sec": 0, 00:16:36.940 "w_mbytes_per_sec": 0 00:16:36.940 }, 00:16:36.940 "claimed": true, 00:16:36.940 "claim_type": "read_many_write_one", 00:16:36.940 "zoned": false, 00:16:36.940 "supported_io_types": { 00:16:36.940 "read": true, 00:16:36.940 "write": true, 00:16:36.940 "unmap": true, 00:16:36.940 "flush": true, 00:16:36.940 "reset": true, 00:16:36.940 "nvme_admin": true, 00:16:36.940 "nvme_io": true, 00:16:36.940 "nvme_io_md": false, 00:16:36.940 "write_zeroes": true, 00:16:36.940 "zcopy": false, 00:16:36.940 "get_zone_info": false, 00:16:36.940 "zone_management": false, 00:16:36.940 "zone_append": false, 00:16:36.940 "compare": true, 00:16:36.940 "compare_and_write": false, 00:16:36.940 "abort": true, 00:16:36.940 "seek_hole": false, 00:16:36.940 "seek_data": false, 00:16:36.940 "copy": true, 00:16:36.940 "nvme_iov_md": false 00:16:36.940 }, 00:16:36.940 "driver_specific": { 00:16:36.940 "nvme": [ 00:16:36.940 { 00:16:36.940 "pci_address": "0000:00:11.0", 00:16:36.941 "trid": { 00:16:36.941 "trtype": "PCIe", 00:16:36.941 "traddr": "0000:00:11.0" 00:16:36.941 }, 00:16:36.941 "ctrlr_data": { 00:16:36.941 "cntlid": 0, 00:16:36.941 "vendor_id": "0x1b36", 00:16:36.941 "model_number": "QEMU NVMe Ctrl", 00:16:36.941 "serial_number": "12341", 00:16:36.941 "firmware_revision": "8.0.0", 00:16:36.941 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:36.941 "oacs": { 00:16:36.941 "security": 0, 00:16:36.941 "format": 1, 00:16:36.941 "firmware": 0, 00:16:36.941 "ns_manage": 1 00:16:36.941 }, 00:16:36.941 "multi_ctrlr": false, 00:16:36.941 "ana_reporting": false 00:16:36.941 }, 00:16:36.941 "vs": { 00:16:36.941 "nvme_version": "1.4" 00:16:36.941 
}, 00:16:36.941 "ns_data": { 00:16:36.941 "id": 1, 00:16:36.941 "can_share": false 00:16:36.941 } 00:16:36.941 } 00:16:36.941 ], 00:16:36.941 "mp_policy": "active_passive" 00:16:36.941 } 00:16:36.941 } 00:16:36.941 ]' 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:36.941 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:37.203 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=8c0b71b0-841c-422f-9a3e-4f46ea08cf58 00:16:37.203 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:37.203 18:27:25 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c0b71b0-841c-422f-9a3e-4f46ea08cf58 00:16:37.464 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:37.726 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=01eba580-8610-442d-8be4-8f49cc13f9bc 00:16:37.726 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 
01eba580-8610-442d-8be4-8f49cc13f9bc 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:37.988 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:38.248 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:38.248 { 00:16:38.248 "name": "7d6dfafa-11f2-41fe-a755-d40f55cc67de", 00:16:38.248 "aliases": [ 00:16:38.248 "lvs/nvme0n1p0" 00:16:38.248 ], 00:16:38.248 "product_name": "Logical Volume", 00:16:38.248 "block_size": 4096, 00:16:38.248 "num_blocks": 26476544, 00:16:38.248 "uuid": "7d6dfafa-11f2-41fe-a755-d40f55cc67de", 00:16:38.248 "assigned_rate_limits": { 00:16:38.248 "rw_ios_per_sec": 0, 00:16:38.249 "rw_mbytes_per_sec": 0, 00:16:38.249 "r_mbytes_per_sec": 0, 00:16:38.249 "w_mbytes_per_sec": 0 00:16:38.249 }, 00:16:38.249 "claimed": false, 
00:16:38.249 "zoned": false, 00:16:38.249 "supported_io_types": { 00:16:38.249 "read": true, 00:16:38.249 "write": true, 00:16:38.249 "unmap": true, 00:16:38.249 "flush": false, 00:16:38.249 "reset": true, 00:16:38.249 "nvme_admin": false, 00:16:38.249 "nvme_io": false, 00:16:38.249 "nvme_io_md": false, 00:16:38.249 "write_zeroes": true, 00:16:38.249 "zcopy": false, 00:16:38.249 "get_zone_info": false, 00:16:38.249 "zone_management": false, 00:16:38.249 "zone_append": false, 00:16:38.249 "compare": false, 00:16:38.249 "compare_and_write": false, 00:16:38.249 "abort": false, 00:16:38.249 "seek_hole": true, 00:16:38.249 "seek_data": true, 00:16:38.249 "copy": false, 00:16:38.249 "nvme_iov_md": false 00:16:38.249 }, 00:16:38.249 "driver_specific": { 00:16:38.249 "lvol": { 00:16:38.249 "lvol_store_uuid": "01eba580-8610-442d-8be4-8f49cc13f9bc", 00:16:38.249 "base_bdev": "nvme0n1", 00:16:38.249 "thin_provision": true, 00:16:38.249 "num_allocated_clusters": 0, 00:16:38.249 "snapshot": false, 00:16:38.249 "clone": false, 00:16:38.249 "esnap_clone": false 00:16:38.249 } 00:16:38.249 } 00:16:38.249 } 00:16:38.249 ]' 00:16:38.249 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:38.249 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:38.249 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:38.249 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:38.249 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:38.249 18:27:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:38.249 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:38.249 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:38.249 18:27:26 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe 
-a 0000:00:10.0 00:16:38.509 18:27:27 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:38.509 18:27:27 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:38.509 18:27:27 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:38.509 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:38.509 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:38.509 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:38.509 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:38.509 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:38.769 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:38.769 { 00:16:38.769 "name": "7d6dfafa-11f2-41fe-a755-d40f55cc67de", 00:16:38.769 "aliases": [ 00:16:38.769 "lvs/nvme0n1p0" 00:16:38.769 ], 00:16:38.769 "product_name": "Logical Volume", 00:16:38.769 "block_size": 4096, 00:16:38.769 "num_blocks": 26476544, 00:16:38.769 "uuid": "7d6dfafa-11f2-41fe-a755-d40f55cc67de", 00:16:38.769 "assigned_rate_limits": { 00:16:38.769 "rw_ios_per_sec": 0, 00:16:38.769 "rw_mbytes_per_sec": 0, 00:16:38.769 "r_mbytes_per_sec": 0, 00:16:38.769 "w_mbytes_per_sec": 0 00:16:38.769 }, 00:16:38.769 "claimed": false, 00:16:38.769 "zoned": false, 00:16:38.769 "supported_io_types": { 00:16:38.769 "read": true, 00:16:38.769 "write": true, 00:16:38.769 "unmap": true, 00:16:38.769 "flush": false, 00:16:38.769 "reset": true, 00:16:38.769 "nvme_admin": false, 00:16:38.769 "nvme_io": false, 00:16:38.769 "nvme_io_md": false, 00:16:38.769 "write_zeroes": true, 00:16:38.769 "zcopy": false, 00:16:38.769 "get_zone_info": false, 00:16:38.769 "zone_management": false, 00:16:38.769 "zone_append": false, 00:16:38.769 
"compare": false, 00:16:38.769 "compare_and_write": false, 00:16:38.769 "abort": false, 00:16:38.769 "seek_hole": true, 00:16:38.769 "seek_data": true, 00:16:38.769 "copy": false, 00:16:38.769 "nvme_iov_md": false 00:16:38.769 }, 00:16:38.769 "driver_specific": { 00:16:38.769 "lvol": { 00:16:38.769 "lvol_store_uuid": "01eba580-8610-442d-8be4-8f49cc13f9bc", 00:16:38.769 "base_bdev": "nvme0n1", 00:16:38.769 "thin_provision": true, 00:16:38.769 "num_allocated_clusters": 0, 00:16:38.769 "snapshot": false, 00:16:38.769 "clone": false, 00:16:38.769 "esnap_clone": false 00:16:38.769 } 00:16:38.769 } 00:16:38.769 } 00:16:38.769 ]' 00:16:38.769 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:38.769 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:38.769 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:38.769 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:38.769 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:38.769 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:38.769 18:27:27 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:38.769 18:27:27 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:39.029 18:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:39.029 18:27:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:39.029 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:39.029 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:39.029 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:39.029 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 
-- # local nb 00:16:39.029 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7d6dfafa-11f2-41fe-a755-d40f55cc67de 00:16:39.288 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:39.288 { 00:16:39.288 "name": "7d6dfafa-11f2-41fe-a755-d40f55cc67de", 00:16:39.288 "aliases": [ 00:16:39.288 "lvs/nvme0n1p0" 00:16:39.288 ], 00:16:39.288 "product_name": "Logical Volume", 00:16:39.288 "block_size": 4096, 00:16:39.288 "num_blocks": 26476544, 00:16:39.288 "uuid": "7d6dfafa-11f2-41fe-a755-d40f55cc67de", 00:16:39.288 "assigned_rate_limits": { 00:16:39.288 "rw_ios_per_sec": 0, 00:16:39.288 "rw_mbytes_per_sec": 0, 00:16:39.288 "r_mbytes_per_sec": 0, 00:16:39.288 "w_mbytes_per_sec": 0 00:16:39.288 }, 00:16:39.288 "claimed": false, 00:16:39.288 "zoned": false, 00:16:39.288 "supported_io_types": { 00:16:39.288 "read": true, 00:16:39.288 "write": true, 00:16:39.288 "unmap": true, 00:16:39.288 "flush": false, 00:16:39.288 "reset": true, 00:16:39.288 "nvme_admin": false, 00:16:39.288 "nvme_io": false, 00:16:39.288 "nvme_io_md": false, 00:16:39.288 "write_zeroes": true, 00:16:39.288 "zcopy": false, 00:16:39.288 "get_zone_info": false, 00:16:39.288 "zone_management": false, 00:16:39.288 "zone_append": false, 00:16:39.288 "compare": false, 00:16:39.288 "compare_and_write": false, 00:16:39.288 "abort": false, 00:16:39.288 "seek_hole": true, 00:16:39.288 "seek_data": true, 00:16:39.288 "copy": false, 00:16:39.288 "nvme_iov_md": false 00:16:39.288 }, 00:16:39.288 "driver_specific": { 00:16:39.288 "lvol": { 00:16:39.288 "lvol_store_uuid": "01eba580-8610-442d-8be4-8f49cc13f9bc", 00:16:39.288 "base_bdev": "nvme0n1", 00:16:39.288 "thin_provision": true, 00:16:39.288 "num_allocated_clusters": 0, 00:16:39.288 "snapshot": false, 00:16:39.288 "clone": false, 00:16:39.288 "esnap_clone": false 00:16:39.288 } 00:16:39.288 } 00:16:39.288 } 00:16:39.288 ]' 00:16:39.288 18:27:27 ftl.ftl_bdevperf 
-- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:39.288 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:39.288 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:39.288 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:39.288 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:39.288 18:27:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:39.288 18:27:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:39.288 18:27:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7d6dfafa-11f2-41fe-a755-d40f55cc67de -c nvc0n1p0 --l2p_dram_limit 20 00:16:39.550 [2024-10-08 18:27:28.193429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.550 [2024-10-08 18:27:28.193518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:39.550 [2024-10-08 18:27:28.193541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:39.550 [2024-10-08 18:27:28.193554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.550 [2024-10-08 18:27:28.193626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.550 [2024-10-08 18:27:28.193642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:39.550 [2024-10-08 18:27:28.193652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:39.550 [2024-10-08 18:27:28.193662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.550 [2024-10-08 18:27:28.193681] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:39.551 [2024-10-08 18:27:28.194052] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV 
Cache device 00:16:39.551 [2024-10-08 18:27:28.194072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.551 [2024-10-08 18:27:28.194083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:39.551 [2024-10-08 18:27:28.194100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:16:39.551 [2024-10-08 18:27:28.194110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.551 [2024-10-08 18:27:28.194150] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 018db934-5b6c-4448-9377-90e2716ed795 00:16:39.551 [2024-10-08 18:27:28.196383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.551 [2024-10-08 18:27:28.196631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:39.551 [2024-10-08 18:27:28.196658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:39.551 [2024-10-08 18:27:28.196668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.551 [2024-10-08 18:27:28.205736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.551 [2024-10-08 18:27:28.205913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:39.551 [2024-10-08 18:27:28.205982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.009 ms 00:16:39.551 [2024-10-08 18:27:28.206006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.551 [2024-10-08 18:27:28.206134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.551 [2024-10-08 18:27:28.206163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:39.551 [2024-10-08 18:27:28.206252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:16:39.551 [2024-10-08 18:27:28.206278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:39.551 [2024-10-08 18:27:28.206362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.551 [2024-10-08 18:27:28.206389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:39.551 [2024-10-08 18:27:28.206414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:39.551 [2024-10-08 18:27:28.206485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.551 [2024-10-08 18:27:28.206544] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:39.551 [2024-10-08 18:27:28.208893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.551 [2024-10-08 18:27:28.209064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:39.551 [2024-10-08 18:27:28.209130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.365 ms 00:16:39.551 [2024-10-08 18:27:28.209158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.551 [2024-10-08 18:27:28.209215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.551 [2024-10-08 18:27:28.209247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:39.551 [2024-10-08 18:27:28.209270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:16:39.551 [2024-10-08 18:27:28.209294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.551 [2024-10-08 18:27:28.209326] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:39.551 [2024-10-08 18:27:28.209528] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:39.551 [2024-10-08 18:27:28.209673] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:39.551 [2024-10-08 18:27:28.209714] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:39.551 [2024-10-08 18:27:28.209766] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:39.551 [2024-10-08 18:27:28.209804] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:39.551 [2024-10-08 18:27:28.209836] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:39.551 [2024-10-08 18:27:28.209857] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:39.551 [2024-10-08 18:27:28.209879] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:39.551 [2024-10-08 18:27:28.209970] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:39.551 [2024-10-08 18:27:28.209995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.551 [2024-10-08 18:27:28.210026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:39.551 [2024-10-08 18:27:28.210053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:16:39.551 [2024-10-08 18:27:28.210078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.551 [2024-10-08 18:27:28.210182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.551 [2024-10-08 18:27:28.210212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:39.551 [2024-10-08 18:27:28.210234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:16:39.551 [2024-10-08 18:27:28.210255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:39.551 [2024-10-08 18:27:28.210383] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:39.551 [2024-10-08 18:27:28.210529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 
00:16:39.551 [2024-10-08 18:27:28.210551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:39.551 [2024-10-08 18:27:28.210579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.551 [2024-10-08 18:27:28.210599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:39.551 [2024-10-08 18:27:28.210621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:39.551 [2024-10-08 18:27:28.210649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:39.551 [2024-10-08 18:27:28.210672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:39.551 [2024-10-08 18:27:28.210691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:39.551 [2024-10-08 18:27:28.210797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:39.551 [2024-10-08 18:27:28.210823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:39.551 [2024-10-08 18:27:28.210846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:39.551 [2024-10-08 18:27:28.210866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:39.551 [2024-10-08 18:27:28.210887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:39.551 [2024-10-08 18:27:28.210906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:39.551 [2024-10-08 18:27:28.210928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.551 [2024-10-08 18:27:28.210950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:39.551 [2024-10-08 18:27:28.210972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:39.551 [2024-10-08 18:27:28.210991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.551 [2024-10-08 18:27:28.211012] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l0 00:16:39.551 [2024-10-08 18:27:28.211032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:39.551 [2024-10-08 18:27:28.211053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.551 [2024-10-08 18:27:28.211137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:39.551 [2024-10-08 18:27:28.211164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:39.551 [2024-10-08 18:27:28.211185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.551 [2024-10-08 18:27:28.211206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:39.551 [2024-10-08 18:27:28.211227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:39.551 [2024-10-08 18:27:28.211254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.551 [2024-10-08 18:27:28.211274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:39.551 [2024-10-08 18:27:28.211296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:39.551 [2024-10-08 18:27:28.211314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:39.551 [2024-10-08 18:27:28.211337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:39.551 [2024-10-08 18:27:28.211355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:39.551 [2024-10-08 18:27:28.211430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:39.551 [2024-10-08 18:27:28.211452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:39.551 [2024-10-08 18:27:28.211474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:39.551 [2024-10-08 18:27:28.211494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:39.551 [2024-10-08 18:27:28.211515] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log 00:16:39.551 [2024-10-08 18:27:28.211537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:39.551 [2024-10-08 18:27:28.211600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.551 [2024-10-08 18:27:28.211623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:39.551 [2024-10-08 18:27:28.211645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:39.551 [2024-10-08 18:27:28.211665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.551 [2024-10-08 18:27:28.211688] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:39.551 [2024-10-08 18:27:28.211710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:39.551 [2024-10-08 18:27:28.211733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:39.551 [2024-10-08 18:27:28.211806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:39.551 [2024-10-08 18:27:28.211838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:39.551 [2024-10-08 18:27:28.211857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:39.551 [2024-10-08 18:27:28.211879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:39.551 [2024-10-08 18:27:28.211900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:39.551 [2024-10-08 18:27:28.211921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:39.551 [2024-10-08 18:27:28.211941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:39.551 [2024-10-08 18:27:28.212019] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:39.551 [2024-10-08 18:27:28.212057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:39.551 [2024-10-08 18:27:28.212093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:39.551 [2024-10-08 18:27:28.212125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:39.551 [2024-10-08 18:27:28.212158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:39.551 [2024-10-08 18:27:28.212227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:39.551 [2024-10-08 18:27:28.212263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:39.552 [2024-10-08 18:27:28.212294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:39.552 [2024-10-08 18:27:28.212372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:39.552 [2024-10-08 18:27:28.212404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:39.552 [2024-10-08 18:27:28.212436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:39.552 [2024-10-08 18:27:28.212501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:39.552 [2024-10-08 18:27:28.212779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:39.552 [2024-10-08 
18:27:28.212919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:39.552 [2024-10-08 18:27:28.212955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:39.552 [2024-10-08 18:27:28.212985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:39.552 [2024-10-08 18:27:28.213020] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:39.552 [2024-10-08 18:27:28.213052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:39.552 [2024-10-08 18:27:28.213139] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:39.552 [2024-10-08 18:27:28.213644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:39.552 [2024-10-08 18:27:28.213677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:39.552 [2024-10-08 18:27:28.213687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:39.552 [2024-10-08 18:27:28.213706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:39.552 [2024-10-08 18:27:28.213724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:39.552 [2024-10-08 18:27:28.213737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.384 ms 00:16:39.552 [2024-10-08 18:27:28.213772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:39.552 [2024-10-08 18:27:28.213833] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:39.552 [2024-10-08 18:27:28.213847] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:43.777 [2024-10-08 18:27:32.116345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.116591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:43.777 [2024-10-08 18:27:32.116618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3902.497 ms 00:16:43.777 [2024-10-08 18:27:32.116634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.138727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.138803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:43.777 [2024-10-08 18:27:32.138839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.010 ms 00:16:43.777 [2024-10-08 18:27:32.138852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.138994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.139008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:43.777 [2024-10-08 18:27:32.139024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:16:43.777 [2024-10-08 18:27:32.139035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.149436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.149481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:43.777 [2024-10-08 18:27:32.149499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
10.324 ms 00:16:43.777 [2024-10-08 18:27:32.149506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.149533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.149542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:43.777 [2024-10-08 18:27:32.149552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:43.777 [2024-10-08 18:27:32.149562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.150009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.150029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:43.777 [2024-10-08 18:27:32.150043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:16:43.777 [2024-10-08 18:27:32.150051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.150163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.150211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:43.777 [2024-10-08 18:27:32.150224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:16:43.777 [2024-10-08 18:27:32.150232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.155884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.155915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:43.777 [2024-10-08 18:27:32.155927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.628 ms 00:16:43.777 [2024-10-08 18:27:32.155935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.165034] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:43.777 [2024-10-08 18:27:32.171039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.171073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:43.777 [2024-10-08 18:27:32.171084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.052 ms 00:16:43.777 [2024-10-08 18:27:32.171094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.226174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.226230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:43.777 [2024-10-08 18:27:32.226243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.059 ms 00:16:43.777 [2024-10-08 18:27:32.226254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.226495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.226516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:43.777 [2024-10-08 18:27:32.226526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:16:43.777 [2024-10-08 18:27:32.226535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.777 [2024-10-08 18:27:32.229793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.777 [2024-10-08 18:27:32.229826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:43.778 [2024-10-08 18:27:32.229836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.237 ms 00:16:43.778 [2024-10-08 18:27:32.229846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.778 [2024-10-08 18:27:32.232580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:43.778 [2024-10-08 18:27:32.232725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:43.778 [2024-10-08 18:27:32.232742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.702 ms 00:16:43.778 [2024-10-08 18:27:32.232763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.778 [2024-10-08 18:27:32.233070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.778 [2024-10-08 18:27:32.233088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:43.778 [2024-10-08 18:27:32.233096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:16:43.778 [2024-10-08 18:27:32.233106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.778 [2024-10-08 18:27:32.260827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.778 [2024-10-08 18:27:32.260868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:43.778 [2024-10-08 18:27:32.260879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.705 ms 00:16:43.778 [2024-10-08 18:27:32.260889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.778 [2024-10-08 18:27:32.265096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.778 [2024-10-08 18:27:32.265134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:43.778 [2024-10-08 18:27:32.265148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.156 ms 00:16:43.778 [2024-10-08 18:27:32.265159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.778 [2024-10-08 18:27:32.268012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.778 [2024-10-08 18:27:32.268047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:43.778 [2024-10-08 18:27:32.268057] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.821 ms 00:16:43.778 [2024-10-08 18:27:32.268068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.778 [2024-10-08 18:27:32.271328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.778 [2024-10-08 18:27:32.271470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:43.778 [2024-10-08 18:27:32.271486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:16:43.778 [2024-10-08 18:27:32.271495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.778 [2024-10-08 18:27:32.271528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.778 [2024-10-08 18:27:32.271540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:43.778 [2024-10-08 18:27:32.271551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:43.778 [2024-10-08 18:27:32.271563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.778 [2024-10-08 18:27:32.271653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:43.778 [2024-10-08 18:27:32.271665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:43.778 [2024-10-08 18:27:32.271674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:43.778 [2024-10-08 18:27:32.271683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:43.778 [2024-10-08 18:27:32.272649] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4078.822 ms, result 0 00:16:43.778 { 00:16:43.778 "name": "ftl0", 00:16:43.778 "uuid": "018db934-5b6c-4448-9377-90e2716ed795" 00:16:43.778 } 00:16:43.778 18:27:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:43.778 18:27:32 ftl.ftl_bdevperf -- 
ftl/bdevperf.sh@28 -- # jq -r .name 00:16:43.778 18:27:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:43.778 18:27:32 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:43.778 [2024-10-08 18:27:32.574113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:43.778 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:43.778 Zero copy mechanism will not be used. 00:16:43.778 Running I/O for 4 seconds... 00:16:46.084 2775.00 IOPS, 184.28 MiB/s [2024-10-08T18:27:35.865Z] 2904.00 IOPS, 192.84 MiB/s [2024-10-08T18:27:36.793Z] 2896.67 IOPS, 192.36 MiB/s [2024-10-08T18:27:36.793Z] 3000.00 IOPS, 199.22 MiB/s 00:16:47.943 Latency(us) 00:16:47.943 [2024-10-08T18:27:36.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.943 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:16:47.944 ftl0 : 4.00 2999.08 199.16 0.00 0.00 350.53 148.09 2092.11 00:16:47.944 [2024-10-08T18:27:36.794Z] =================================================================================================================== 00:16:47.944 [2024-10-08T18:27:36.794Z] Total : 2999.08 199.16 0.00 0.00 350.53 148.09 2092.11 00:16:47.944 [2024-10-08 18:27:36.581788] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:47.944 { 00:16:47.944 "results": [ 00:16:47.944 { 00:16:47.944 "job": "ftl0", 00:16:47.944 "core_mask": "0x1", 00:16:47.944 "workload": "randwrite", 00:16:47.944 "status": "finished", 00:16:47.944 "queue_depth": 1, 00:16:47.944 "io_size": 69632, 00:16:47.944 "runtime": 4.001558, 00:16:47.944 "iops": 2999.0818576164584, 00:16:47.944 "mibps": 199.15777960734295, 00:16:47.944 "io_failed": 0, 00:16:47.944 "io_timeout": 0, 00:16:47.944 "avg_latency_us": 350.52573516309536, 00:16:47.944 
"min_latency_us": 148.08615384615385, 00:16:47.944 "max_latency_us": 2092.110769230769 00:16:47.944 } 00:16:47.944 ], 00:16:47.944 "core_count": 1 00:16:47.944 } 00:16:47.944 18:27:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:16:47.944 [2024-10-08 18:27:36.686400] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:47.944 Running I/O for 4 seconds... 00:16:50.305 10569.00 IOPS, 41.29 MiB/s [2024-10-08T18:27:39.725Z] 8294.00 IOPS, 32.40 MiB/s [2024-10-08T18:27:41.110Z] 7402.33 IOPS, 28.92 MiB/s [2024-10-08T18:27:41.110Z] 7284.00 IOPS, 28.45 MiB/s 00:16:52.260 Latency(us) 00:16:52.260 [2024-10-08T18:27:41.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.260 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:16:52.260 ftl0 : 4.03 7260.49 28.36 0.00 0.00 17558.41 247.34 132281.90 00:16:52.260 [2024-10-08T18:27:41.110Z] =================================================================================================================== 00:16:52.260 [2024-10-08T18:27:41.110Z] Total : 7260.49 28.36 0.00 0.00 17558.41 0.00 132281.90 00:16:52.260 { 00:16:52.260 "results": [ 00:16:52.260 { 00:16:52.260 "job": "ftl0", 00:16:52.260 "core_mask": "0x1", 00:16:52.260 "workload": "randwrite", 00:16:52.260 "status": "finished", 00:16:52.260 "queue_depth": 128, 00:16:52.260 "io_size": 4096, 00:16:52.260 "runtime": 4.029753, 00:16:52.260 "iops": 7260.494625849276, 00:16:52.261 "mibps": 28.361307132223736, 00:16:52.261 "io_failed": 0, 00:16:52.261 "io_timeout": 0, 00:16:52.261 "avg_latency_us": 17558.41054344111, 00:16:52.261 "min_latency_us": 247.3353846153846, 00:16:52.261 "max_latency_us": 132281.89538461537 00:16:52.261 } 00:16:52.261 ], 00:16:52.261 "core_count": 1 00:16:52.261 } 00:16:52.261 [2024-10-08 18:27:40.723263] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:52.261 18:27:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:16:52.261 [2024-10-08 18:27:40.832746] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:52.261 Running I/O for 4 seconds... 00:16:54.138 6139.00 IOPS, 23.98 MiB/s [2024-10-08T18:27:43.924Z] 5783.00 IOPS, 22.59 MiB/s [2024-10-08T18:27:44.859Z] 5547.00 IOPS, 21.67 MiB/s [2024-10-08T18:27:44.859Z] 5684.50 IOPS, 22.21 MiB/s 00:16:56.009 Latency(us) 00:16:56.009 [2024-10-08T18:27:44.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.009 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:56.009 Verification LBA range: start 0x0 length 0x1400000 00:16:56.009 ftl0 : 4.01 5698.26 22.26 0.00 0.00 22403.07 269.39 105664.20 00:16:56.009 [2024-10-08T18:27:44.859Z] =================================================================================================================== 00:16:56.009 [2024-10-08T18:27:44.859Z] Total : 5698.26 22.26 0.00 0.00 22403.07 0.00 105664.20 00:16:56.009 [2024-10-08 18:27:44.852327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:16:56.009 { 00:16:56.009 "results": [ 00:16:56.009 { 00:16:56.009 "job": "ftl0", 00:16:56.009 "core_mask": "0x1", 00:16:56.009 "workload": "verify", 00:16:56.009 "status": "finished", 00:16:56.009 "verify_range": { 00:16:56.009 "start": 0, 00:16:56.009 "length": 20971520 00:16:56.009 }, 00:16:56.009 "queue_depth": 128, 00:16:56.009 "io_size": 4096, 00:16:56.009 "runtime": 4.011748, 00:16:56.009 "iops": 5698.264198050326, 00:16:56.009 "mibps": 22.258844523634085, 00:16:56.009 "io_failed": 0, 00:16:56.009 "io_timeout": 0, 00:16:56.009 "avg_latency_us": 22403.06752055993, 00:16:56.009 
"min_latency_us": 269.39076923076925, 00:16:56.009 "max_latency_us": 105664.19692307692 00:16:56.009 } 00:16:56.009 ], 00:16:56.009 "core_count": 1 00:16:56.009 } 00:16:56.268 18:27:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:16:56.268 [2024-10-08 18:27:45.056677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.268 [2024-10-08 18:27:45.056734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:56.268 [2024-10-08 18:27:45.056749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:56.268 [2024-10-08 18:27:45.056777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.268 [2024-10-08 18:27:45.056804] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:56.268 [2024-10-08 18:27:45.057432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.268 [2024-10-08 18:27:45.057486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:56.268 [2024-10-08 18:27:45.057500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:16:56.268 [2024-10-08 18:27:45.057513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.268 [2024-10-08 18:27:45.060160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.268 [2024-10-08 18:27:45.060194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:56.268 [2024-10-08 18:27:45.060210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.615 ms 00:16:56.268 [2024-10-08 18:27:45.060219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.531 [2024-10-08 18:27:45.250916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.531 [2024-10-08 18:27:45.250976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist L2P 00:16:56.531 [2024-10-08 18:27:45.250995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 190.670 ms 00:16:56.531 [2024-10-08 18:27:45.251004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.531 [2024-10-08 18:27:45.257244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.531 [2024-10-08 18:27:45.257277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:56.531 [2024-10-08 18:27:45.257298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.193 ms 00:16:56.531 [2024-10-08 18:27:45.257306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.531 [2024-10-08 18:27:45.260257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.531 [2024-10-08 18:27:45.260296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:56.531 [2024-10-08 18:27:45.260309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.891 ms 00:16:56.531 [2024-10-08 18:27:45.260317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.531 [2024-10-08 18:27:45.266180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.531 [2024-10-08 18:27:45.266276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:56.531 [2024-10-08 18:27:45.266315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.807 ms 00:16:56.531 [2024-10-08 18:27:45.266332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.531 [2024-10-08 18:27:45.266572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.531 [2024-10-08 18:27:45.266593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:56.531 [2024-10-08 18:27:45.266613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:16:56.531 [2024-10-08 18:27:45.266628] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.531 [2024-10-08 18:27:45.269935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.531 [2024-10-08 18:27:45.269996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:56.531 [2024-10-08 18:27:45.270018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.269 ms 00:16:56.531 [2024-10-08 18:27:45.270033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.531 [2024-10-08 18:27:45.272875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.531 [2024-10-08 18:27:45.272928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:56.531 [2024-10-08 18:27:45.272949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.777 ms 00:16:56.531 [2024-10-08 18:27:45.272963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.531 [2024-10-08 18:27:45.275069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.531 [2024-10-08 18:27:45.275102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:56.531 [2024-10-08 18:27:45.275115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.049 ms 00:16:56.531 [2024-10-08 18:27:45.275122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.531 [2024-10-08 18:27:45.277139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.531 [2024-10-08 18:27:45.277172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:56.531 [2024-10-08 18:27:45.277183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.958 ms 00:16:56.531 [2024-10-08 18:27:45.277190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.531 [2024-10-08 18:27:45.277222] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 
00:16:56.531 [2024-10-08 18:27:45.277236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:16:56.531 [2024-10-08 18:27:45.277361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:16:56.531 [2024-10-08 18:27:45.277489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:56.531 [2024-10-08 18:27:45.277604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:16:56.532 [2024-10-08 18:27:45.277632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 
0 state: free 00:16:56.532 [2024-10-08 18:27:45.277776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.277998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 
261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 
/ 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:56.532 [2024-10-08 18:27:45.278178] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:56.532 [2024-10-08 18:27:45.278188] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 018db934-5b6c-4448-9377-90e2716ed795 00:16:56.532 [2024-10-08 18:27:45.278198] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:56.532 [2024-10-08 18:27:45.278207] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:56.532 [2024-10-08 18:27:45.278215] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:56.532 [2024-10-08 18:27:45.278226] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:56.532 [2024-10-08 18:27:45.278239] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:56.532 [2024-10-08 18:27:45.278248] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:56.532 [2024-10-08 18:27:45.278256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:56.532 [2024-10-08 18:27:45.278264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:56.532 [2024-10-08 18:27:45.278271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:56.532 [2024-10-08 18:27:45.278280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.532 [2024-10-08 18:27:45.278288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:56.532 [2024-10-08 18:27:45.278298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.060 ms 00:16:56.532 [2024-10-08 18:27:45.278311] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:16:56.532 [2024-10-08 18:27:45.280079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.532 [2024-10-08 18:27:45.280219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:56.532 [2024-10-08 18:27:45.280238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.748 ms 00:16:56.532 [2024-10-08 18:27:45.280246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.532 [2024-10-08 18:27:45.280359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.532 [2024-10-08 18:27:45.280369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:56.532 [2024-10-08 18:27:45.280383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:16:56.532 [2024-10-08 18:27:45.280390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.532 [2024-10-08 18:27:45.285913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.532 [2024-10-08 18:27:45.285950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.532 [2024-10-08 18:27:45.285962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.532 [2024-10-08 18:27:45.285970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.532 [2024-10-08 18:27:45.286033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.532 [2024-10-08 18:27:45.286042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.532 [2024-10-08 18:27:45.286052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.532 [2024-10-08 18:27:45.286059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.532 [2024-10-08 18:27:45.286113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.532 [2024-10-08 18:27:45.286122] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.532 [2024-10-08 18:27:45.286132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.532 [2024-10-08 18:27:45.286143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.532 [2024-10-08 18:27:45.286164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.533 [2024-10-08 18:27:45.286172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.533 [2024-10-08 18:27:45.286183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.533 [2024-10-08 18:27:45.286191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.533 [2024-10-08 18:27:45.296665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.533 [2024-10-08 18:27:45.296711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.533 [2024-10-08 18:27:45.296727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.533 [2024-10-08 18:27:45.296735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.533 [2024-10-08 18:27:45.305730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.533 [2024-10-08 18:27:45.305991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.533 [2024-10-08 18:27:45.306012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.533 [2024-10-08 18:27:45.306021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.533 [2024-10-08 18:27:45.306137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.533 [2024-10-08 18:27:45.306149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:56.533 [2024-10-08 18:27:45.306160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:16:56.533 [2024-10-08 18:27:45.306167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.533 [2024-10-08 18:27:45.306202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.533 [2024-10-08 18:27:45.306211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:56.533 [2024-10-08 18:27:45.306223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.533 [2024-10-08 18:27:45.306231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.533 [2024-10-08 18:27:45.306303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.533 [2024-10-08 18:27:45.306315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:56.533 [2024-10-08 18:27:45.306325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.533 [2024-10-08 18:27:45.306332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.533 [2024-10-08 18:27:45.306365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.533 [2024-10-08 18:27:45.306374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:56.533 [2024-10-08 18:27:45.306383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.533 [2024-10-08 18:27:45.306391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.533 [2024-10-08 18:27:45.306433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.533 [2024-10-08 18:27:45.306443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:56.533 [2024-10-08 18:27:45.306453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.533 [2024-10-08 18:27:45.306465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.533 [2024-10-08 18:27:45.306515] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:56.533 [2024-10-08 18:27:45.306526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:56.533 [2024-10-08 18:27:45.306539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:56.533 [2024-10-08 18:27:45.306547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.533 [2024-10-08 18:27:45.306691] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 249.966 ms, result 0 00:16:56.533 true 00:16:56.533 18:27:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 86114 00:16:56.533 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 86114 ']' 00:16:56.533 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 86114 00:16:56.533 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:16:56.533 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:56.533 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86114 00:16:56.533 killing process with pid 86114 00:16:56.533 Received shutdown signal, test time was about 4.000000 seconds 00:16:56.533 00:16:56.533 Latency(us) 00:16:56.533 [2024-10-08T18:27:45.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.533 [2024-10-08T18:27:45.383Z] =================================================================================================================== 00:16:56.533 [2024-10-08T18:27:45.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.533 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:56.533 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:56.533 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86114' 00:16:56.533 18:27:45 
ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 86114 00:16:56.533 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 86114 00:16:57.102 Remove shared memory files 00:16:57.102 18:27:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:57.102 18:27:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:16:57.102 18:27:45 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:57.102 18:27:45 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:16:57.102 18:27:45 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:16:57.102 18:27:45 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:16:57.102 18:27:45 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:57.102 18:27:45 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:16:57.102 ************************************ 00:16:57.102 END TEST ftl_bdevperf 00:16:57.102 ************************************ 00:16:57.102 00:16:57.102 real 0m21.784s 00:16:57.102 user 0m24.472s 00:16:57.102 sys 0m1.060s 00:16:57.102 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.102 18:27:45 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:57.102 18:27:45 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:57.102 18:27:45 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:57.102 18:27:45 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.102 18:27:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:57.102 ************************************ 00:16:57.102 START TEST ftl_trim 00:16:57.102 ************************************ 00:16:57.102 18:27:45 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:16:57.102 * Looking for test storage... 
00:16:57.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:57.102 18:27:45 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:57.102 18:27:45 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:16:57.102 18:27:45 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:57.363 18:27:45 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.363 18:27:45 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:16:57.363 18:27:45 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.363 18:27:45 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:57.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.363 --rc genhtml_branch_coverage=1 00:16:57.363 --rc genhtml_function_coverage=1 00:16:57.363 --rc genhtml_legend=1 00:16:57.363 --rc geninfo_all_blocks=1 00:16:57.363 --rc geninfo_unexecuted_blocks=1 00:16:57.363 00:16:57.363 ' 00:16:57.363 18:27:45 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:57.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.363 --rc genhtml_branch_coverage=1 00:16:57.363 --rc genhtml_function_coverage=1 00:16:57.363 --rc genhtml_legend=1 00:16:57.363 --rc geninfo_all_blocks=1 00:16:57.363 --rc geninfo_unexecuted_blocks=1 00:16:57.363 00:16:57.363 ' 00:16:57.363 
18:27:45 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:57.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.363 --rc genhtml_branch_coverage=1 00:16:57.363 --rc genhtml_function_coverage=1 00:16:57.363 --rc genhtml_legend=1 00:16:57.363 --rc geninfo_all_blocks=1 00:16:57.363 --rc geninfo_unexecuted_blocks=1 00:16:57.363 00:16:57.363 ' 00:16:57.363 18:27:45 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:57.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.363 --rc genhtml_branch_coverage=1 00:16:57.364 --rc genhtml_function_coverage=1 00:16:57.364 --rc genhtml_legend=1 00:16:57.364 --rc geninfo_all_blocks=1 00:16:57.364 --rc geninfo_unexecuted_blocks=1 00:16:57.364 00:16:57.364 ' 00:16:57.364 18:27:45 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:57.364 18:27:45 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:57.364 18:27:45 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:57.364 18:27:45 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:57.364 18:27:45 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:57.364 18:27:46 
ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=86464 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:57.364 18:27:46 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 86464 00:16:57.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:57.364 18:27:46 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86464 ']' 00:16:57.364 18:27:46 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.364 18:27:46 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.364 18:27:46 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.364 18:27:46 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.364 18:27:46 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:16:57.364 [2024-10-08 18:27:46.109385] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:16:57.364 [2024-10-08 18:27:46.109583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86464 ] 00:16:57.623 [2024-10-08 18:27:46.247832] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:57.623 [2024-10-08 18:27:46.267649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:57.623 [2024-10-08 18:27:46.343146] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.623 [2024-10-08 18:27:46.343504] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.623 [2024-10-08 18:27:46.343517] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.191 18:27:46 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.191 18:27:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:16:58.191 18:27:46 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:58.191 18:27:46 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:16:58.191 18:27:46 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:58.191 18:27:46 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:16:58.191 18:27:46 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:16:58.191 18:27:46 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:58.761 18:27:47 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:58.761 18:27:47 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:16:58.761 18:27:47 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 
00:16:58.761 { 00:16:58.761 "name": "nvme0n1", 00:16:58.761 "aliases": [ 00:16:58.761 "58572336-dd04-4929-a08e-5d8d6a02cc7c" 00:16:58.761 ], 00:16:58.761 "product_name": "NVMe disk", 00:16:58.761 "block_size": 4096, 00:16:58.761 "num_blocks": 1310720, 00:16:58.761 "uuid": "58572336-dd04-4929-a08e-5d8d6a02cc7c", 00:16:58.761 "numa_id": -1, 00:16:58.761 "assigned_rate_limits": { 00:16:58.761 "rw_ios_per_sec": 0, 00:16:58.761 "rw_mbytes_per_sec": 0, 00:16:58.761 "r_mbytes_per_sec": 0, 00:16:58.761 "w_mbytes_per_sec": 0 00:16:58.761 }, 00:16:58.761 "claimed": true, 00:16:58.761 "claim_type": "read_many_write_one", 00:16:58.761 "zoned": false, 00:16:58.761 "supported_io_types": { 00:16:58.761 "read": true, 00:16:58.761 "write": true, 00:16:58.761 "unmap": true, 00:16:58.761 "flush": true, 00:16:58.761 "reset": true, 00:16:58.761 "nvme_admin": true, 00:16:58.761 "nvme_io": true, 00:16:58.761 "nvme_io_md": false, 00:16:58.761 "write_zeroes": true, 00:16:58.761 "zcopy": false, 00:16:58.761 "get_zone_info": false, 00:16:58.761 "zone_management": false, 00:16:58.761 "zone_append": false, 00:16:58.761 "compare": true, 00:16:58.761 "compare_and_write": false, 00:16:58.761 "abort": true, 00:16:58.761 "seek_hole": false, 00:16:58.761 "seek_data": false, 00:16:58.761 "copy": true, 00:16:58.761 "nvme_iov_md": false 00:16:58.761 }, 00:16:58.761 "driver_specific": { 00:16:58.761 "nvme": [ 00:16:58.761 { 00:16:58.761 "pci_address": "0000:00:11.0", 00:16:58.761 "trid": { 00:16:58.761 "trtype": "PCIe", 00:16:58.761 "traddr": "0000:00:11.0" 00:16:58.761 }, 00:16:58.761 "ctrlr_data": { 00:16:58.761 "cntlid": 0, 00:16:58.761 "vendor_id": "0x1b36", 00:16:58.761 "model_number": "QEMU NVMe Ctrl", 00:16:58.761 "serial_number": "12341", 00:16:58.761 "firmware_revision": "8.0.0", 00:16:58.761 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:58.761 "oacs": { 00:16:58.761 "security": 0, 00:16:58.761 "format": 1, 00:16:58.761 "firmware": 0, 00:16:58.761 "ns_manage": 1 00:16:58.761 }, 00:16:58.761 
"multi_ctrlr": false, 00:16:58.761 "ana_reporting": false 00:16:58.761 }, 00:16:58.761 "vs": { 00:16:58.761 "nvme_version": "1.4" 00:16:58.761 }, 00:16:58.761 "ns_data": { 00:16:58.761 "id": 1, 00:16:58.761 "can_share": false 00:16:58.761 } 00:16:58.761 } 00:16:58.761 ], 00:16:58.761 "mp_policy": "active_passive" 00:16:58.761 } 00:16:58.761 } 00:16:58.761 ]' 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:58.761 18:27:47 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:16:58.761 18:27:47 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:16:58.761 18:27:47 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:58.761 18:27:47 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:16:59.020 18:27:47 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:59.020 18:27:47 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:59.020 18:27:47 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=01eba580-8610-442d-8be4-8f49cc13f9bc 00:16:59.020 18:27:47 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:16:59.020 18:27:47 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 01eba580-8610-442d-8be4-8f49cc13f9bc 00:16:59.279 18:27:48 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:59.541 18:27:48 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=ff1af36e-dd8f-4474-8ee3-b53e96158702 00:16:59.541 18:27:48 ftl.ftl_trim -- ftl/common.sh@69 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ff1af36e-dd8f-4474-8ee3-b53e96158702 00:16:59.800 18:27:48 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=e4034373-6a71-4de2-a814-00e8437b82cc 00:16:59.800 18:27:48 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e4034373-6a71-4de2-a814-00e8437b82cc 00:16:59.800 18:27:48 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:16:59.800 18:27:48 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:59.800 18:27:48 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=e4034373-6a71-4de2-a814-00e8437b82cc 00:16:59.800 18:27:48 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:16:59.800 18:27:48 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size e4034373-6a71-4de2-a814-00e8437b82cc 00:16:59.800 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e4034373-6a71-4de2-a814-00e8437b82cc 00:16:59.800 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:59.800 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:16:59.800 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:16:59.800 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4034373-6a71-4de2-a814-00e8437b82cc 00:17:00.062 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:00.062 { 00:17:00.062 "name": "e4034373-6a71-4de2-a814-00e8437b82cc", 00:17:00.062 "aliases": [ 00:17:00.062 "lvs/nvme0n1p0" 00:17:00.062 ], 00:17:00.062 "product_name": "Logical Volume", 00:17:00.062 "block_size": 4096, 00:17:00.062 "num_blocks": 26476544, 00:17:00.062 "uuid": "e4034373-6a71-4de2-a814-00e8437b82cc", 00:17:00.062 "assigned_rate_limits": { 00:17:00.062 "rw_ios_per_sec": 0, 00:17:00.062 "rw_mbytes_per_sec": 0, 00:17:00.062 "r_mbytes_per_sec": 0, 00:17:00.062 "w_mbytes_per_sec": 0 00:17:00.062 }, 
00:17:00.062 "claimed": false, 00:17:00.062 "zoned": false, 00:17:00.062 "supported_io_types": { 00:17:00.062 "read": true, 00:17:00.062 "write": true, 00:17:00.062 "unmap": true, 00:17:00.062 "flush": false, 00:17:00.062 "reset": true, 00:17:00.062 "nvme_admin": false, 00:17:00.062 "nvme_io": false, 00:17:00.062 "nvme_io_md": false, 00:17:00.062 "write_zeroes": true, 00:17:00.062 "zcopy": false, 00:17:00.062 "get_zone_info": false, 00:17:00.062 "zone_management": false, 00:17:00.062 "zone_append": false, 00:17:00.062 "compare": false, 00:17:00.062 "compare_and_write": false, 00:17:00.062 "abort": false, 00:17:00.062 "seek_hole": true, 00:17:00.062 "seek_data": true, 00:17:00.062 "copy": false, 00:17:00.062 "nvme_iov_md": false 00:17:00.062 }, 00:17:00.062 "driver_specific": { 00:17:00.062 "lvol": { 00:17:00.062 "lvol_store_uuid": "ff1af36e-dd8f-4474-8ee3-b53e96158702", 00:17:00.062 "base_bdev": "nvme0n1", 00:17:00.062 "thin_provision": true, 00:17:00.062 "num_allocated_clusters": 0, 00:17:00.062 "snapshot": false, 00:17:00.062 "clone": false, 00:17:00.062 "esnap_clone": false 00:17:00.062 } 00:17:00.062 } 00:17:00.062 } 00:17:00.062 ]' 00:17:00.062 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:00.062 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:00.062 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:00.062 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:00.062 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:00.062 18:27:48 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:00.062 18:27:48 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:00.062 18:27:48 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:00.062 18:27:48 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 
0000:00:10.0 00:17:00.320 18:27:49 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:00.320 18:27:49 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:00.320 18:27:49 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size e4034373-6a71-4de2-a814-00e8437b82cc 00:17:00.320 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e4034373-6a71-4de2-a814-00e8437b82cc 00:17:00.320 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:00.320 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:00.320 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:00.320 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4034373-6a71-4de2-a814-00e8437b82cc 00:17:00.579 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:00.579 { 00:17:00.579 "name": "e4034373-6a71-4de2-a814-00e8437b82cc", 00:17:00.579 "aliases": [ 00:17:00.579 "lvs/nvme0n1p0" 00:17:00.579 ], 00:17:00.579 "product_name": "Logical Volume", 00:17:00.579 "block_size": 4096, 00:17:00.580 "num_blocks": 26476544, 00:17:00.580 "uuid": "e4034373-6a71-4de2-a814-00e8437b82cc", 00:17:00.580 "assigned_rate_limits": { 00:17:00.580 "rw_ios_per_sec": 0, 00:17:00.580 "rw_mbytes_per_sec": 0, 00:17:00.580 "r_mbytes_per_sec": 0, 00:17:00.580 "w_mbytes_per_sec": 0 00:17:00.580 }, 00:17:00.580 "claimed": false, 00:17:00.580 "zoned": false, 00:17:00.580 "supported_io_types": { 00:17:00.580 "read": true, 00:17:00.580 "write": true, 00:17:00.580 "unmap": true, 00:17:00.580 "flush": false, 00:17:00.580 "reset": true, 00:17:00.580 "nvme_admin": false, 00:17:00.580 "nvme_io": false, 00:17:00.580 "nvme_io_md": false, 00:17:00.580 "write_zeroes": true, 00:17:00.580 "zcopy": false, 00:17:00.580 "get_zone_info": false, 00:17:00.580 "zone_management": false, 00:17:00.580 "zone_append": false, 00:17:00.580 "compare": false, 00:17:00.580 
"compare_and_write": false, 00:17:00.580 "abort": false, 00:17:00.580 "seek_hole": true, 00:17:00.580 "seek_data": true, 00:17:00.580 "copy": false, 00:17:00.580 "nvme_iov_md": false 00:17:00.580 }, 00:17:00.580 "driver_specific": { 00:17:00.580 "lvol": { 00:17:00.580 "lvol_store_uuid": "ff1af36e-dd8f-4474-8ee3-b53e96158702", 00:17:00.580 "base_bdev": "nvme0n1", 00:17:00.580 "thin_provision": true, 00:17:00.580 "num_allocated_clusters": 0, 00:17:00.580 "snapshot": false, 00:17:00.580 "clone": false, 00:17:00.580 "esnap_clone": false 00:17:00.580 } 00:17:00.580 } 00:17:00.580 } 00:17:00.580 ]' 00:17:00.580 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:00.580 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:00.580 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:00.580 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:00.580 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:00.580 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:00.580 18:27:49 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:00.580 18:27:49 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:00.842 18:27:49 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:00.842 18:27:49 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:00.842 18:27:49 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size e4034373-6a71-4de2-a814-00e8437b82cc 00:17:00.842 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=e4034373-6a71-4de2-a814-00e8437b82cc 00:17:00.842 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:00.842 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:00.842 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 
00:17:00.842 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e4034373-6a71-4de2-a814-00e8437b82cc 00:17:01.104 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:01.104 { 00:17:01.104 "name": "e4034373-6a71-4de2-a814-00e8437b82cc", 00:17:01.104 "aliases": [ 00:17:01.104 "lvs/nvme0n1p0" 00:17:01.104 ], 00:17:01.104 "product_name": "Logical Volume", 00:17:01.104 "block_size": 4096, 00:17:01.104 "num_blocks": 26476544, 00:17:01.104 "uuid": "e4034373-6a71-4de2-a814-00e8437b82cc", 00:17:01.104 "assigned_rate_limits": { 00:17:01.104 "rw_ios_per_sec": 0, 00:17:01.104 "rw_mbytes_per_sec": 0, 00:17:01.104 "r_mbytes_per_sec": 0, 00:17:01.104 "w_mbytes_per_sec": 0 00:17:01.104 }, 00:17:01.104 "claimed": false, 00:17:01.104 "zoned": false, 00:17:01.104 "supported_io_types": { 00:17:01.104 "read": true, 00:17:01.104 "write": true, 00:17:01.104 "unmap": true, 00:17:01.104 "flush": false, 00:17:01.104 "reset": true, 00:17:01.104 "nvme_admin": false, 00:17:01.104 "nvme_io": false, 00:17:01.104 "nvme_io_md": false, 00:17:01.104 "write_zeroes": true, 00:17:01.104 "zcopy": false, 00:17:01.104 "get_zone_info": false, 00:17:01.104 "zone_management": false, 00:17:01.104 "zone_append": false, 00:17:01.104 "compare": false, 00:17:01.104 "compare_and_write": false, 00:17:01.104 "abort": false, 00:17:01.104 "seek_hole": true, 00:17:01.104 "seek_data": true, 00:17:01.104 "copy": false, 00:17:01.104 "nvme_iov_md": false 00:17:01.104 }, 00:17:01.104 "driver_specific": { 00:17:01.104 "lvol": { 00:17:01.104 "lvol_store_uuid": "ff1af36e-dd8f-4474-8ee3-b53e96158702", 00:17:01.104 "base_bdev": "nvme0n1", 00:17:01.104 "thin_provision": true, 00:17:01.104 "num_allocated_clusters": 0, 00:17:01.104 "snapshot": false, 00:17:01.104 "clone": false, 00:17:01.104 "esnap_clone": false 00:17:01.104 } 00:17:01.104 } 00:17:01.104 } 00:17:01.104 ]' 00:17:01.104 18:27:49 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:01.104 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:01.104 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:01.104 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:01.104 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:01.104 18:27:49 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:01.104 18:27:49 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:01.104 18:27:49 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e4034373-6a71-4de2-a814-00e8437b82cc -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:01.369 [2024-10-08 18:27:50.019290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.019357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:01.369 [2024-10-08 18:27:50.019376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:01.369 [2024-10-08 18:27:50.019386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.369 [2024-10-08 18:27:50.022492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.022548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:01.369 [2024-10-08 18:27:50.022565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.041 ms 00:17:01.369 [2024-10-08 18:27:50.022573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.369 [2024-10-08 18:27:50.022780] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:01.369 [2024-10-08 18:27:50.023085] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV 
Cache device 00:17:01.369 [2024-10-08 18:27:50.023119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.023129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:01.369 [2024-10-08 18:27:50.023141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:17:01.369 [2024-10-08 18:27:50.023163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.369 [2024-10-08 18:27:50.023326] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6d5ed988-dd07-46b0-9a05-0af5ae35e0f7 00:17:01.369 [2024-10-08 18:27:50.025343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.025624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:01.369 [2024-10-08 18:27:50.025654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:01.369 [2024-10-08 18:27:50.025668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.369 [2024-10-08 18:27:50.035399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.035448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:01.369 [2024-10-08 18:27:50.035460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.488 ms 00:17:01.369 [2024-10-08 18:27:50.035478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.369 [2024-10-08 18:27:50.035654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.035675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:01.369 [2024-10-08 18:27:50.035703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:01.369 [2024-10-08 18:27:50.035715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:01.369 [2024-10-08 18:27:50.035831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.035849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:01.369 [2024-10-08 18:27:50.035859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:01.369 [2024-10-08 18:27:50.035869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.369 [2024-10-08 18:27:50.035935] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:01.369 [2024-10-08 18:27:50.038258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.038306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:01.369 [2024-10-08 18:27:50.038320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:17:01.369 [2024-10-08 18:27:50.038329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.369 [2024-10-08 18:27:50.038411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.038423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:01.369 [2024-10-08 18:27:50.038437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:01.369 [2024-10-08 18:27:50.038447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.369 [2024-10-08 18:27:50.038518] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:01.369 [2024-10-08 18:27:50.038686] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:01.369 [2024-10-08 18:27:50.038703] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:01.369 [2024-10-08 18:27:50.038715] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:01.369 [2024-10-08 18:27:50.038728] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:01.369 [2024-10-08 18:27:50.038739] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:01.369 [2024-10-08 18:27:50.038771] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:01.369 [2024-10-08 18:27:50.038780] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:01.369 [2024-10-08 18:27:50.038793] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:01.369 [2024-10-08 18:27:50.038816] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:01.369 [2024-10-08 18:27:50.038826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.038834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:01.369 [2024-10-08 18:27:50.038861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:17:01.369 [2024-10-08 18:27:50.038869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.369 [2024-10-08 18:27:50.038989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.369 [2024-10-08 18:27:50.039012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:01.369 [2024-10-08 18:27:50.039023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:01.369 [2024-10-08 18:27:50.039032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:01.369 [2024-10-08 18:27:50.039225] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:01.369 [2024-10-08 18:27:50.039236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 
00:17:01.369 [2024-10-08 18:27:50.039248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.369 [2024-10-08 18:27:50.039260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.369 [2024-10-08 18:27:50.039271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:01.369 [2024-10-08 18:27:50.039278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:01.369 [2024-10-08 18:27:50.039289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:01.369 [2024-10-08 18:27:50.039297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:01.369 [2024-10-08 18:27:50.039306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:01.369 [2024-10-08 18:27:50.039312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.369 [2024-10-08 18:27:50.039321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:01.369 [2024-10-08 18:27:50.039329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:01.369 [2024-10-08 18:27:50.039342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:01.369 [2024-10-08 18:27:50.039350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:01.369 [2024-10-08 18:27:50.039360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:01.369 [2024-10-08 18:27:50.039367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.369 [2024-10-08 18:27:50.039375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:01.369 [2024-10-08 18:27:50.039384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:01.369 [2024-10-08 18:27:50.039408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.369 [2024-10-08 18:27:50.039415] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l0 00:17:01.369 [2024-10-08 18:27:50.039424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:01.369 [2024-10-08 18:27:50.039431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.369 [2024-10-08 18:27:50.039441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:01.369 [2024-10-08 18:27:50.039449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:01.369 [2024-10-08 18:27:50.039458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.369 [2024-10-08 18:27:50.039466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:01.369 [2024-10-08 18:27:50.039477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:01.369 [2024-10-08 18:27:50.039485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.369 [2024-10-08 18:27:50.039497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:01.370 [2024-10-08 18:27:50.039503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:01.370 [2024-10-08 18:27:50.039513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:01.370 [2024-10-08 18:27:50.039521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:01.370 [2024-10-08 18:27:50.039531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:01.370 [2024-10-08 18:27:50.039537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.370 [2024-10-08 18:27:50.039546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:01.370 [2024-10-08 18:27:50.039555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:01.370 [2024-10-08 18:27:50.039564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:01.370 [2024-10-08 18:27:50.039571] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log 00:17:01.370 [2024-10-08 18:27:50.039580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:01.370 [2024-10-08 18:27:50.039589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.370 [2024-10-08 18:27:50.039599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:01.370 [2024-10-08 18:27:50.039605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:01.370 [2024-10-08 18:27:50.039615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.370 [2024-10-08 18:27:50.039625] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:01.370 [2024-10-08 18:27:50.039659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:01.370 [2024-10-08 18:27:50.039672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:01.370 [2024-10-08 18:27:50.039687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:01.370 [2024-10-08 18:27:50.039696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:01.370 [2024-10-08 18:27:50.039705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:01.370 [2024-10-08 18:27:50.039713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:01.370 [2024-10-08 18:27:50.039722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:01.370 [2024-10-08 18:27:50.039729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:01.370 [2024-10-08 18:27:50.039737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:01.370 [2024-10-08 18:27:50.040060] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:01.370 [2024-10-08 18:27:50.040120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.370 [2024-10-08 18:27:50.040153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:01.370 [2024-10-08 18:27:50.040189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:01.370 [2024-10-08 18:27:50.040219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:01.370 [2024-10-08 18:27:50.040250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:01.370 [2024-10-08 18:27:50.040280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:01.370 [2024-10-08 18:27:50.040314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:01.370 [2024-10-08 18:27:50.040344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:01.370 [2024-10-08 18:27:50.040375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:01.370 [2024-10-08 18:27:50.040404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:01.370 [2024-10-08 18:27:50.040435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:01.370 [2024-10-08 18:27:50.040465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:01.370 [2024-10-08 
18:27:50.040496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:01.370 [2024-10-08 18:27:50.041050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:01.370 [2024-10-08 18:27:50.041156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:01.370 [2024-10-08 18:27:50.041235] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:01.370 [2024-10-08 18:27:50.041274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:01.370 [2024-10-08 18:27:50.041337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:01.370 [2024-10-08 18:27:50.041370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:01.370 [2024-10-08 18:27:50.041400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:01.370 [2024-10-08 18:27:50.041498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:01.370 [2024-10-08 18:27:50.041536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:01.370 [2024-10-08 18:27:50.041567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:01.370 [2024-10-08 18:27:50.041593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.379 ms 00:17:01.370 [2024-10-08 18:27:50.041657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:01.370 [2024-10-08 18:27:50.041874] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:01.370 [2024-10-08 18:27:50.041955] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:04.666 [2024-10-08 18:27:52.820609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.666 [2024-10-08 18:27:52.820849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:04.666 [2024-10-08 18:27:52.820872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2778.726 ms 00:17:04.666 [2024-10-08 18:27:52.820896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.666 [2024-10-08 18:27:52.841866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.666 [2024-10-08 18:27:52.841925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:04.666 [2024-10-08 18:27:52.841943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.854 ms 00:17:04.666 [2024-10-08 18:27:52.841959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.666 [2024-10-08 18:27:52.842142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.666 [2024-10-08 18:27:52.842175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:04.666 [2024-10-08 18:27:52.842187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:04.666 [2024-10-08 18:27:52.842200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.666 [2024-10-08 18:27:52.853240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.666 [2024-10-08 18:27:52.853280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:04.666 [2024-10-08 18:27:52.853291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
11.003 ms 00:17:04.666 [2024-10-08 18:27:52.853301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.666 [2024-10-08 18:27:52.853390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.666 [2024-10-08 18:27:52.853402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:04.666 [2024-10-08 18:27:52.853411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:04.666 [2024-10-08 18:27:52.853421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.666 [2024-10-08 18:27:52.853882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.666 [2024-10-08 18:27:52.853903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:04.666 [2024-10-08 18:27:52.853913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:17:04.666 [2024-10-08 18:27:52.853937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.666 [2024-10-08 18:27:52.854071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.666 [2024-10-08 18:27:52.854087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:04.666 [2024-10-08 18:27:52.854096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:04.666 [2024-10-08 18:27:52.854108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.666 [2024-10-08 18:27:52.860963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.666 [2024-10-08 18:27:52.860998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:04.666 [2024-10-08 18:27:52.861008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.824 ms 00:17:04.666 [2024-10-08 18:27:52.861018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.666 [2024-10-08 18:27:52.870090] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:04.666 [2024-10-08 18:27:52.887314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.666 [2024-10-08 18:27:52.887466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:04.667 [2024-10-08 18:27:52.887486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.214 ms 00:17:04.667 [2024-10-08 18:27:52.887494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.945017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.667 [2024-10-08 18:27:52.945067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:04.667 [2024-10-08 18:27:52.945086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.445 ms 00:17:04.667 [2024-10-08 18:27:52.945095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.945303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.667 [2024-10-08 18:27:52.945315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:04.667 [2024-10-08 18:27:52.945329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:17:04.667 [2024-10-08 18:27:52.945338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.949666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.667 [2024-10-08 18:27:52.949834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:04.667 [2024-10-08 18:27:52.949857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.293 ms 00:17:04.667 [2024-10-08 18:27:52.949866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.952684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:04.667 [2024-10-08 18:27:52.952717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:04.667 [2024-10-08 18:27:52.952731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.773 ms 00:17:04.667 [2024-10-08 18:27:52.952739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.953112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.667 [2024-10-08 18:27:52.953128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:04.667 [2024-10-08 18:27:52.953144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:17:04.667 [2024-10-08 18:27:52.953152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.979874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.667 [2024-10-08 18:27:52.980002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:04.667 [2024-10-08 18:27:52.980023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.692 ms 00:17:04.667 [2024-10-08 18:27:52.980032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.984161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.667 [2024-10-08 18:27:52.984196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:04.667 [2024-10-08 18:27:52.984212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.071 ms 00:17:04.667 [2024-10-08 18:27:52.984221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.987092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.667 [2024-10-08 18:27:52.987208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:04.667 [2024-10-08 18:27:52.987226] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:17:04.667 [2024-10-08 18:27:52.987247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.990852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.667 [2024-10-08 18:27:52.990887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:04.667 [2024-10-08 18:27:52.990902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.565 ms 00:17:04.667 [2024-10-08 18:27:52.990910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.990963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.667 [2024-10-08 18:27:52.990973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:04.667 [2024-10-08 18:27:52.990986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:04.667 [2024-10-08 18:27:52.990993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.991084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.667 [2024-10-08 18:27:52.991102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:04.667 [2024-10-08 18:27:52.991112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:04.667 [2024-10-08 18:27:52.991120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.667 [2024-10-08 18:27:52.992063] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:04.667 [2024-10-08 18:27:52.993087] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2972.551 ms, result 0 00:17:04.667 [2024-10-08 18:27:52.993716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:17:04.667 { 00:17:04.667 "name": "ftl0", 00:17:04.667 "uuid": "6d5ed988-dd07-46b0-9a05-0af5ae35e0f7" 00:17:04.667 } 00:17:04.667 18:27:53 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:04.667 18:27:53 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:04.667 18:27:53 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:04.667 18:27:53 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:17:04.667 18:27:53 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:04.667 18:27:53 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:04.667 18:27:53 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:04.667 18:27:53 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:04.667 [ 00:17:04.667 { 00:17:04.667 "name": "ftl0", 00:17:04.667 "aliases": [ 00:17:04.667 "6d5ed988-dd07-46b0-9a05-0af5ae35e0f7" 00:17:04.667 ], 00:17:04.667 "product_name": "FTL disk", 00:17:04.667 "block_size": 4096, 00:17:04.667 "num_blocks": 23592960, 00:17:04.667 "uuid": "6d5ed988-dd07-46b0-9a05-0af5ae35e0f7", 00:17:04.667 "assigned_rate_limits": { 00:17:04.667 "rw_ios_per_sec": 0, 00:17:04.667 "rw_mbytes_per_sec": 0, 00:17:04.667 "r_mbytes_per_sec": 0, 00:17:04.667 "w_mbytes_per_sec": 0 00:17:04.667 }, 00:17:04.667 "claimed": false, 00:17:04.667 "zoned": false, 00:17:04.667 "supported_io_types": { 00:17:04.667 "read": true, 00:17:04.667 "write": true, 00:17:04.667 "unmap": true, 00:17:04.667 "flush": true, 00:17:04.667 "reset": false, 00:17:04.667 "nvme_admin": false, 00:17:04.667 "nvme_io": false, 00:17:04.667 "nvme_io_md": false, 00:17:04.667 "write_zeroes": true, 00:17:04.667 "zcopy": false, 00:17:04.667 "get_zone_info": false, 00:17:04.667 "zone_management": false, 00:17:04.667 "zone_append": false, 00:17:04.667 "compare": false, 
00:17:04.667 "compare_and_write": false, 00:17:04.667 "abort": false, 00:17:04.667 "seek_hole": false, 00:17:04.667 "seek_data": false, 00:17:04.667 "copy": false, 00:17:04.667 "nvme_iov_md": false 00:17:04.667 }, 00:17:04.667 "driver_specific": { 00:17:04.667 "ftl": { 00:17:04.667 "base_bdev": "e4034373-6a71-4de2-a814-00e8437b82cc", 00:17:04.667 "cache": "nvc0n1p0" 00:17:04.667 } 00:17:04.667 } 00:17:04.667 } 00:17:04.667 ] 00:17:04.667 18:27:53 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:17:04.667 18:27:53 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:04.667 18:27:53 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:04.928 18:27:53 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:04.928 18:27:53 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:05.242 18:27:53 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:05.242 { 00:17:05.242 "name": "ftl0", 00:17:05.242 "aliases": [ 00:17:05.242 "6d5ed988-dd07-46b0-9a05-0af5ae35e0f7" 00:17:05.242 ], 00:17:05.242 "product_name": "FTL disk", 00:17:05.242 "block_size": 4096, 00:17:05.242 "num_blocks": 23592960, 00:17:05.242 "uuid": "6d5ed988-dd07-46b0-9a05-0af5ae35e0f7", 00:17:05.242 "assigned_rate_limits": { 00:17:05.242 "rw_ios_per_sec": 0, 00:17:05.242 "rw_mbytes_per_sec": 0, 00:17:05.242 "r_mbytes_per_sec": 0, 00:17:05.242 "w_mbytes_per_sec": 0 00:17:05.242 }, 00:17:05.242 "claimed": false, 00:17:05.242 "zoned": false, 00:17:05.242 "supported_io_types": { 00:17:05.242 "read": true, 00:17:05.242 "write": true, 00:17:05.242 "unmap": true, 00:17:05.242 "flush": true, 00:17:05.242 "reset": false, 00:17:05.242 "nvme_admin": false, 00:17:05.242 "nvme_io": false, 00:17:05.242 "nvme_io_md": false, 00:17:05.242 "write_zeroes": true, 00:17:05.242 "zcopy": false, 00:17:05.242 "get_zone_info": false, 00:17:05.242 "zone_management": false, 00:17:05.242 "zone_append": 
false, 00:17:05.242 "compare": false, 00:17:05.242 "compare_and_write": false, 00:17:05.242 "abort": false, 00:17:05.242 "seek_hole": false, 00:17:05.242 "seek_data": false, 00:17:05.242 "copy": false, 00:17:05.242 "nvme_iov_md": false 00:17:05.242 }, 00:17:05.242 "driver_specific": { 00:17:05.242 "ftl": { 00:17:05.242 "base_bdev": "e4034373-6a71-4de2-a814-00e8437b82cc", 00:17:05.242 "cache": "nvc0n1p0" 00:17:05.242 } 00:17:05.242 } 00:17:05.242 } 00:17:05.242 ]' 00:17:05.242 18:27:53 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:05.242 18:27:53 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:05.242 18:27:53 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:05.242 [2024-10-08 18:27:54.049963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.242 [2024-10-08 18:27:54.050034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:05.242 [2024-10-08 18:27:54.050055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:05.242 [2024-10-08 18:27:54.050074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.242 [2024-10-08 18:27:54.050117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:05.242 [2024-10-08 18:27:54.050713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.242 [2024-10-08 18:27:54.050772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:05.242 [2024-10-08 18:27:54.050791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:17:05.242 [2024-10-08 18:27:54.050804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.242 [2024-10-08 18:27:54.051421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.242 [2024-10-08 18:27:54.051448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Stop core poller 00:17:05.242 [2024-10-08 18:27:54.051469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:17:05.243 [2024-10-08 18:27:54.051484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.243 [2024-10-08 18:27:54.056992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.243 [2024-10-08 18:27:54.057146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:05.243 [2024-10-08 18:27:54.057170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.470 ms 00:17:05.243 [2024-10-08 18:27:54.057182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.243 [2024-10-08 18:27:54.066423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.243 [2024-10-08 18:27:54.066462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:05.243 [2024-10-08 18:27:54.066478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.153 ms 00:17:05.243 [2024-10-08 18:27:54.066488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.243 [2024-10-08 18:27:54.069230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.243 [2024-10-08 18:27:54.069269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:05.243 [2024-10-08 18:27:54.069283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.646 ms 00:17:05.243 [2024-10-08 18:27:54.069291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.503 [2024-10-08 18:27:54.074480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.503 [2024-10-08 18:27:54.074516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:05.503 [2024-10-08 18:27:54.074542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.138 ms 00:17:05.503 [2024-10-08 18:27:54.074551] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.503 [2024-10-08 18:27:54.074728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.503 [2024-10-08 18:27:54.074740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:05.503 [2024-10-08 18:27:54.074779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:17:05.503 [2024-10-08 18:27:54.074787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.503 [2024-10-08 18:27:54.077289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.503 [2024-10-08 18:27:54.077322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:05.503 [2024-10-08 18:27:54.077336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.471 ms 00:17:05.503 [2024-10-08 18:27:54.077343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.503 [2024-10-08 18:27:54.079018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.503 [2024-10-08 18:27:54.079048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:05.503 [2024-10-08 18:27:54.079059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:17:05.503 [2024-10-08 18:27:54.079066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.503 [2024-10-08 18:27:54.080831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.503 [2024-10-08 18:27:54.080875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:05.503 [2024-10-08 18:27:54.080896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:17:05.503 [2024-10-08 18:27:54.080913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.503 [2024-10-08 18:27:54.082761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.503 
[2024-10-08 18:27:54.082791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:05.503 [2024-10-08 18:27:54.082802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:17:05.503 [2024-10-08 18:27:54.082809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.503 [2024-10-08 18:27:54.082868] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:05.503 [2024-10-08 18:27:54.082883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 
0 state: free 00:17:05.503 [2024-10-08 18:27:54.082984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.082991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 
261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:05.503 [2024-10-08 18:27:54.083258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 
/ 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
81: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:05.504 [2024-10-08 18:27:54.083790] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:05.504 [2024-10-08 18:27:54.083801] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d5ed988-dd07-46b0-9a05-0af5ae35e0f7 00:17:05.504 [2024-10-08 18:27:54.083809] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:05.504 [2024-10-08 18:27:54.083820] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:05.504 [2024-10-08 18:27:54.083828] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:05.504 [2024-10-08 18:27:54.083837] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:05.504 [2024-10-08 18:27:54.083844] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:05.504 [2024-10-08 18:27:54.083856] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:05.504 [2024-10-08 18:27:54.083873] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:05.504 [2024-10-08 18:27:54.083882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:05.504 [2024-10-08 18:27:54.083889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] start: 0 00:17:05.504 [2024-10-08 18:27:54.083898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.504 [2024-10-08 18:27:54.083906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:05.504 [2024-10-08 18:27:54.083927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:17:05.504 [2024-10-08 18:27:54.083943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.504 [2024-10-08 18:27:54.085892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.504 [2024-10-08 18:27:54.085915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:05.504 [2024-10-08 18:27:54.085927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.920 ms 00:17:05.504 [2024-10-08 18:27:54.085938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.504 [2024-10-08 18:27:54.086044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.504 [2024-10-08 18:27:54.086052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:05.504 [2024-10-08 18:27:54.086075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:05.504 [2024-10-08 18:27:54.086083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.504 [2024-10-08 18:27:54.092600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.504 [2024-10-08 18:27:54.092631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.504 [2024-10-08 18:27:54.092646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.504 [2024-10-08 18:27:54.092655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.504 [2024-10-08 18:27:54.092768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.504 [2024-10-08 18:27:54.092779] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.504 [2024-10-08 18:27:54.092792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.504 [2024-10-08 18:27:54.092799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.504 [2024-10-08 18:27:54.092856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.504 [2024-10-08 18:27:54.092865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.504 [2024-10-08 18:27:54.092875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.504 [2024-10-08 18:27:54.092884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.504 [2024-10-08 18:27:54.092915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.504 [2024-10-08 18:27:54.092923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.504 [2024-10-08 18:27:54.092932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.504 [2024-10-08 18:27:54.092939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.505 [2024-10-08 18:27:54.105026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.505 [2024-10-08 18:27:54.105070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.505 [2024-10-08 18:27:54.105084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.505 [2024-10-08 18:27:54.105095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.505 [2024-10-08 18:27:54.114949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.505 [2024-10-08 18:27:54.115129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.505 [2024-10-08 18:27:54.115152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:05.505 [2024-10-08 18:27:54.115161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.505 [2024-10-08 18:27:54.115239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.505 [2024-10-08 18:27:54.115249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:05.505 [2024-10-08 18:27:54.115260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.505 [2024-10-08 18:27:54.115267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.505 [2024-10-08 18:27:54.115325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.505 [2024-10-08 18:27:54.115333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:05.505 [2024-10-08 18:27:54.115343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.505 [2024-10-08 18:27:54.115350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.505 [2024-10-08 18:27:54.115448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.505 [2024-10-08 18:27:54.115459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:05.505 [2024-10-08 18:27:54.115468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.505 [2024-10-08 18:27:54.115475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.505 [2024-10-08 18:27:54.115520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.505 [2024-10-08 18:27:54.115532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:05.505 [2024-10-08 18:27:54.115557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.505 [2024-10-08 18:27:54.115565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.505 [2024-10-08 18:27:54.115619] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:05.505 [2024-10-08 18:27:54.115628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:05.505 [2024-10-08 18:27:54.115637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.505 [2024-10-08 18:27:54.115644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.505 [2024-10-08 18:27:54.115725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:05.505 [2024-10-08 18:27:54.115736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:05.505 [2024-10-08 18:27:54.115746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:05.505 [2024-10-08 18:27:54.115775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.505 [2024-10-08 18:27:54.115971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.005 ms, result 0 00:17:05.505 true 00:17:05.505 18:27:54 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 86464 00:17:05.505 18:27:54 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86464 ']' 00:17:05.505 18:27:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86464 00:17:05.505 18:27:54 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:05.505 18:27:54 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.505 18:27:54 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86464 00:17:05.505 18:27:54 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:05.505 killing process with pid 86464 00:17:05.505 18:27:54 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:05.505 18:27:54 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86464' 00:17:05.505 18:27:54 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86464 00:17:05.505 
18:27:54 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86464 00:17:10.791 18:27:59 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:11.735 65536+0 records in 00:17:11.735 65536+0 records out 00:17:11.735 268435456 bytes (268 MB, 256 MiB) copied, 1.10178 s, 244 MB/s 00:17:11.735 18:28:00 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:11.735 [2024-10-08 18:28:00.395824] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:11.735 [2024-10-08 18:28:00.395974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86637 ] 00:17:11.735 [2024-10-08 18:28:00.529139] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:11.735 [2024-10-08 18:28:00.550813] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.996 [2024-10-08 18:28:00.625806] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.996 [2024-10-08 18:28:00.785611] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:11.996 [2024-10-08 18:28:00.786047] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:12.259 [2024-10-08 18:28:00.949501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.949841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:12.259 [2024-10-08 18:28:00.949925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:12.259 [2024-10-08 18:28:00.949988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.952778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.952937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.259 [2024-10-08 18:28:00.953002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.720 ms 00:17:12.259 [2024-10-08 18:28:00.953052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.953957] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:12.259 [2024-10-08 18:28:00.954672] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:12.259 [2024-10-08 18:28:00.954848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.954899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.259 [2024-10-08 18:28:00.954956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 
00:17:12.259 [2024-10-08 18:28:00.955006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.957433] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:12.259 [2024-10-08 18:28:00.962513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.962657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:12.259 [2024-10-08 18:28:00.962711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.082 ms 00:17:12.259 [2024-10-08 18:28:00.962788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.963253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.963405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:12.259 [2024-10-08 18:28:00.963478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:17:12.259 [2024-10-08 18:28:00.963531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.975102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.975235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.259 [2024-10-08 18:28:00.975294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.470 ms 00:17:12.259 [2024-10-08 18:28:00.975305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.975462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.975475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.259 [2024-10-08 18:28:00.975486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:12.259 [2024-10-08 18:28:00.975494] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.975527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.975540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:12.259 [2024-10-08 18:28:00.975549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:12.259 [2024-10-08 18:28:00.975557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.975585] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:12.259 [2024-10-08 18:28:00.978332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.978380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.259 [2024-10-08 18:28:00.978392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:17:12.259 [2024-10-08 18:28:00.978409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.978465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.978484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:12.259 [2024-10-08 18:28:00.978494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:12.259 [2024-10-08 18:28:00.978504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.978530] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:12.259 [2024-10-08 18:28:00.978556] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:12.259 [2024-10-08 18:28:00.978600] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 
00:17:12.259 [2024-10-08 18:28:00.978619] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:12.259 [2024-10-08 18:28:00.978736] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:12.259 [2024-10-08 18:28:00.978764] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:12.259 [2024-10-08 18:28:00.978777] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:12.259 [2024-10-08 18:28:00.978795] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:12.259 [2024-10-08 18:28:00.978806] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:12.259 [2024-10-08 18:28:00.978816] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:12.259 [2024-10-08 18:28:00.978826] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:12.259 [2024-10-08 18:28:00.978836] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:12.259 [2024-10-08 18:28:00.978846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:12.259 [2024-10-08 18:28:00.978862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.978875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:12.259 [2024-10-08 18:28:00.978884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:17:12.259 [2024-10-08 18:28:00.978896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.978986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.259 [2024-10-08 18:28:00.978997] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:12.259 [2024-10-08 18:28:00.979010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:12.259 [2024-10-08 18:28:00.979017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.259 [2024-10-08 18:28:00.979118] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:12.259 [2024-10-08 18:28:00.979135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:12.259 [2024-10-08 18:28:00.979144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:12.259 [2024-10-08 18:28:00.979159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.259 [2024-10-08 18:28:00.979167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:12.259 [2024-10-08 18:28:00.979173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:12.259 [2024-10-08 18:28:00.979189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:12.259 [2024-10-08 18:28:00.979199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:12.260 [2024-10-08 18:28:00.979206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:12.260 [2024-10-08 18:28:00.979213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:12.260 [2024-10-08 18:28:00.979220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:12.260 [2024-10-08 18:28:00.979227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:12.260 [2024-10-08 18:28:00.979235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:12.260 [2024-10-08 18:28:00.979246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:12.260 [2024-10-08 18:28:00.979253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:12.260 [2024-10-08 18:28:00.979260] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.260 [2024-10-08 18:28:00.979267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:12.260 [2024-10-08 18:28:00.979274] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:12.260 [2024-10-08 18:28:00.979281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.260 [2024-10-08 18:28:00.979289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:12.260 [2024-10-08 18:28:00.979296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:12.260 [2024-10-08 18:28:00.979302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.260 [2024-10-08 18:28:00.979309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:12.260 [2024-10-08 18:28:00.979321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:12.260 [2024-10-08 18:28:00.979328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.260 [2024-10-08 18:28:00.979335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:12.260 [2024-10-08 18:28:00.979342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:12.260 [2024-10-08 18:28:00.979349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.260 [2024-10-08 18:28:00.979356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:12.260 [2024-10-08 18:28:00.979364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:12.260 [2024-10-08 18:28:00.979371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.260 [2024-10-08 18:28:00.979377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:12.260 [2024-10-08 18:28:00.979385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:12.260 [2024-10-08 
18:28:00.979392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:12.260 [2024-10-08 18:28:00.979399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:12.260 [2024-10-08 18:28:00.979407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:12.260 [2024-10-08 18:28:00.979414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:12.260 [2024-10-08 18:28:00.979421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:12.260 [2024-10-08 18:28:00.979428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:12.260 [2024-10-08 18:28:00.979439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.260 [2024-10-08 18:28:00.979446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:12.260 [2024-10-08 18:28:00.979453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:12.260 [2024-10-08 18:28:00.979460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.260 [2024-10-08 18:28:00.979467] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:12.260 [2024-10-08 18:28:00.979475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:12.260 [2024-10-08 18:28:00.979484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:12.260 [2024-10-08 18:28:00.979492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.260 [2024-10-08 18:28:00.979500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:12.260 [2024-10-08 18:28:00.979507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:12.260 [2024-10-08 18:28:00.979515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:12.260 [2024-10-08 18:28:00.979522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region data_btm 00:17:12.260 [2024-10-08 18:28:00.979529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:12.260 [2024-10-08 18:28:00.979536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:12.260 [2024-10-08 18:28:00.979545] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:12.260 [2024-10-08 18:28:00.979554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:12.260 [2024-10-08 18:28:00.979566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:12.260 [2024-10-08 18:28:00.979574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:12.260 [2024-10-08 18:28:00.979583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:12.260 [2024-10-08 18:28:00.979589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:12.260 [2024-10-08 18:28:00.979597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:12.260 [2024-10-08 18:28:00.979604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:12.260 [2024-10-08 18:28:00.979611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:12.260 [2024-10-08 18:28:00.979618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:12.260 [2024-10-08 18:28:00.979625] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:12.260 [2024-10-08 18:28:00.979634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:12.260 [2024-10-08 18:28:00.979641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:12.260 [2024-10-08 18:28:00.979648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:12.260 [2024-10-08 18:28:00.979655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:12.260 [2024-10-08 18:28:00.979662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:12.260 [2024-10-08 18:28:00.979670] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:12.260 [2024-10-08 18:28:00.979678] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:12.260 [2024-10-08 18:28:00.979691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:12.260 [2024-10-08 18:28:00.979699] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:12.260 [2024-10-08 18:28:00.979706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:12.260 [2024-10-08 18:28:00.979713] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:12.260 [2024-10-08 18:28:00.979721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.260 [2024-10-08 18:28:00.979732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:12.260 [2024-10-08 18:28:00.979744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:17:12.260 [2024-10-08 18:28:00.979767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.260 [2024-10-08 18:28:01.007767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.260 [2024-10-08 18:28:01.007834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:12.260 [2024-10-08 18:28:01.007850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.907 ms 00:17:12.260 [2024-10-08 18:28:01.007861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.260 [2024-10-08 18:28:01.008028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.260 [2024-10-08 18:28:01.008042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:12.260 [2024-10-08 18:28:01.008059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:12.260 [2024-10-08 18:28:01.008068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.260 [2024-10-08 18:28:01.024050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.260 [2024-10-08 18:28:01.024094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:12.260 [2024-10-08 18:28:01.024107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.948 ms 00:17:12.260 [2024-10-08 18:28:01.024116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.260 [2024-10-08 18:28:01.024203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.260 [2024-10-08 
18:28:01.024215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:12.260 [2024-10-08 18:28:01.024227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:12.260 [2024-10-08 18:28:01.024236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.260 [2024-10-08 18:28:01.024959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.260 [2024-10-08 18:28:01.025002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:12.260 [2024-10-08 18:28:01.025015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:17:12.260 [2024-10-08 18:28:01.025035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.260 [2024-10-08 18:28:01.025206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.260 [2024-10-08 18:28:01.025225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:12.260 [2024-10-08 18:28:01.025234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:17:12.260 [2024-10-08 18:28:01.025247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.260 [2024-10-08 18:28:01.035584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.260 [2024-10-08 18:28:01.035635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:12.260 [2024-10-08 18:28:01.035646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.306 ms 00:17:12.260 [2024-10-08 18:28:01.035662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.260 [2024-10-08 18:28:01.040201] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:12.260 [2024-10-08 18:28:01.040266] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:12.260 
[2024-10-08 18:28:01.040289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.260 [2024-10-08 18:28:01.040299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:12.260 [2024-10-08 18:28:01.040309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.462 ms 00:17:12.260 [2024-10-08 18:28:01.040318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.260 [2024-10-08 18:28:01.056613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.261 [2024-10-08 18:28:01.056669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:12.261 [2024-10-08 18:28:01.056682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.212 ms 00:17:12.261 [2024-10-08 18:28:01.056691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.261 [2024-10-08 18:28:01.059872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.261 [2024-10-08 18:28:01.059923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:12.261 [2024-10-08 18:28:01.059934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms 00:17:12.261 [2024-10-08 18:28:01.059943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.261 [2024-10-08 18:28:01.062800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.261 [2024-10-08 18:28:01.062860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:12.261 [2024-10-08 18:28:01.062871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.800 ms 00:17:12.261 [2024-10-08 18:28:01.062878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.261 [2024-10-08 18:28:01.063264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.261 [2024-10-08 18:28:01.063285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:17:12.261 [2024-10-08 18:28:01.063302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:17:12.261 [2024-10-08 18:28:01.063318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.261 [2024-10-08 18:28:01.092504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.261 [2024-10-08 18:28:01.092591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:12.261 [2024-10-08 18:28:01.092608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.156 ms 00:17:12.261 [2024-10-08 18:28:01.092619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.261 [2024-10-08 18:28:01.101576] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:12.563 [2024-10-08 18:28:01.126344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.563 [2024-10-08 18:28:01.126411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:12.563 [2024-10-08 18:28:01.126429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.614 ms 00:17:12.563 [2024-10-08 18:28:01.126439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.563 [2024-10-08 18:28:01.126566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.563 [2024-10-08 18:28:01.126580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:12.563 [2024-10-08 18:28:01.126591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:12.563 [2024-10-08 18:28:01.126600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.563 [2024-10-08 18:28:01.126681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.563 [2024-10-08 18:28:01.126692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:12.563 
[2024-10-08 18:28:01.126702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:17:12.563 [2024-10-08 18:28:01.126711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.563 [2024-10-08 18:28:01.126744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.563 [2024-10-08 18:28:01.126782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:12.563 [2024-10-08 18:28:01.126793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:12.563 [2024-10-08 18:28:01.126801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.563 [2024-10-08 18:28:01.126849] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:12.563 [2024-10-08 18:28:01.126861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.563 [2024-10-08 18:28:01.126874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:12.563 [2024-10-08 18:28:01.126884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:12.563 [2024-10-08 18:28:01.126894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.563 [2024-10-08 18:28:01.133658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.563 [2024-10-08 18:28:01.133717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:12.563 [2024-10-08 18:28:01.133730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.738 ms 00:17:12.563 [2024-10-08 18:28:01.133739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.563 [2024-10-08 18:28:01.133864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.563 [2024-10-08 18:28:01.133877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:12.563 [2024-10-08 18:28:01.133891] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:12.563 [2024-10-08 18:28:01.133899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.563 [2024-10-08 18:28:01.135852] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:12.563 [2024-10-08 18:28:01.137329] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 185.967 ms, result 0 00:17:12.563 [2024-10-08 18:28:01.138575] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:12.563 [2024-10-08 18:28:01.146118] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:13.527  [2024-10-08T18:28:03.320Z] Copying: 12/256 [MB] (12 MBps) [2024-10-08T18:28:04.263Z] Copying: 36/256 [MB] (23 MBps) [2024-10-08T18:28:05.205Z] Copying: 73/256 [MB] (37 MBps) [2024-10-08T18:28:06.150Z] Copying: 101/256 [MB] (27 MBps) [2024-10-08T18:28:07.536Z] Copying: 113028/262144 [kB] (9432 kBps) [2024-10-08T18:28:08.481Z] Copying: 123/256 [MB] (12 MBps) [2024-10-08T18:28:09.423Z] Copying: 141/256 [MB] (17 MBps) [2024-10-08T18:28:10.367Z] Copying: 163/256 [MB] (22 MBps) [2024-10-08T18:28:11.309Z] Copying: 179/256 [MB] (15 MBps) [2024-10-08T18:28:12.250Z] Copying: 193392/262144 [kB] (9684 kBps) [2024-10-08T18:28:13.193Z] Copying: 201/256 [MB] (13 MBps) [2024-10-08T18:28:14.583Z] Copying: 217/256 [MB] (15 MBps) [2024-10-08T18:28:15.156Z] Copying: 228/256 [MB] (10 MBps) [2024-10-08T18:28:16.544Z] Copying: 239/256 [MB] (10 MBps) [2024-10-08T18:28:17.120Z] Copying: 254448/262144 [kB] (9664 kBps) [2024-10-08T18:28:17.120Z] Copying: 256/256 [MB] (average 16 MBps)[2024-10-08 18:28:16.863650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:28.270 [2024-10-08 18:28:16.866123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:28.270 [2024-10-08 18:28:16.866179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:28.270 [2024-10-08 18:28:16.866197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:28.270 [2024-10-08 18:28:16.866212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.866237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:28.270 [2024-10-08 18:28:16.867196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.867239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:28.270 [2024-10-08 18:28:16.867253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:17:28.270 [2024-10-08 18:28:16.867263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.870954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.871003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:28.270 [2024-10-08 18:28:16.871015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.663 ms 00:17:28.270 [2024-10-08 18:28:16.871025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.880782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.880841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:28.270 [2024-10-08 18:28:16.880853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.738 ms 00:17:28.270 [2024-10-08 18:28:16.880862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.887823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.887871] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:28.270 [2024-10-08 18:28:16.887883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.914 ms 00:17:28.270 [2024-10-08 18:28:16.887893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.891319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.891370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:28.270 [2024-10-08 18:28:16.891381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms 00:17:28.270 [2024-10-08 18:28:16.891390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.896790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.896840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:28.270 [2024-10-08 18:28:16.896863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.353 ms 00:17:28.270 [2024-10-08 18:28:16.896876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.897016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.897039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:28.270 [2024-10-08 18:28:16.897048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:17:28.270 [2024-10-08 18:28:16.897057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.900605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.900656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:28.270 [2024-10-08 18:28:16.900666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 3.529 ms 00:17:28.270 [2024-10-08 18:28:16.900675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.903998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.904047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:28.270 [2024-10-08 18:28:16.904057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:17:28.270 [2024-10-08 18:28:16.904065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.906863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.906914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:28.270 [2024-10-08 18:28:16.906923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.751 ms 00:17:28.270 [2024-10-08 18:28:16.906931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.909653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.270 [2024-10-08 18:28:16.909701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:28.270 [2024-10-08 18:28:16.909711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:17:28.270 [2024-10-08 18:28:16.909720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.270 [2024-10-08 18:28:16.909778] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:28.270 [2024-10-08 18:28:16.909804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.909996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:28.270 [2024-10-08 18:28:16.910003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 
261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 
/ 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 
0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
73: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:28.271 [2024-10-08 18:28:16.910645] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:17:28.271 [2024-10-08 18:28:16.910667] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d5ed988-dd07-46b0-9a05-0af5ae35e0f7 00:17:28.271 [2024-10-08 18:28:16.910678] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:28.271 [2024-10-08 18:28:16.910686] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:28.271 [2024-10-08 18:28:16.910694] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:28.271 [2024-10-08 18:28:16.910702] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:28.271 [2024-10-08 18:28:16.910711] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:28.271 [2024-10-08 18:28:16.910721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:28.271 [2024-10-08 18:28:16.910737] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:28.271 [2024-10-08 18:28:16.910744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:28.271 [2024-10-08 18:28:16.910767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:28.271 [2024-10-08 18:28:16.910775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.271 [2024-10-08 18:28:16.910785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:28.271 [2024-10-08 18:28:16.910794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:17:28.271 [2024-10-08 18:28:16.910805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.271 [2024-10-08 18:28:16.913983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.271 [2024-10-08 18:28:16.914021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:28.271 [2024-10-08 18:28:16.914032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.154 ms 00:17:28.271 [2024-10-08 
18:28:16.914042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.271 [2024-10-08 18:28:16.914196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.271 [2024-10-08 18:28:16.914214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:28.271 [2024-10-08 18:28:16.914224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:17:28.271 [2024-10-08 18:28:16.914231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.271 [2024-10-08 18:28:16.924240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.271 [2024-10-08 18:28:16.924296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.272 [2024-10-08 18:28:16.924308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.924319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.924428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.272 [2024-10-08 18:28:16.924443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.272 [2024-10-08 18:28:16.924452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.924462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.924512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.272 [2024-10-08 18:28:16.924522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.272 [2024-10-08 18:28:16.924530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.924538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.924557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:17:28.272 [2024-10-08 18:28:16.924566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.272 [2024-10-08 18:28:16.924578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.924586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.944606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.272 [2024-10-08 18:28:16.944671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.272 [2024-10-08 18:28:16.944697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.944708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.960601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.272 [2024-10-08 18:28:16.960670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.272 [2024-10-08 18:28:16.960694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.960704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.960821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.272 [2024-10-08 18:28:16.960835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.272 [2024-10-08 18:28:16.960846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.960856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.960894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.272 [2024-10-08 18:28:16.960905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.272 [2024-10-08 18:28:16.960914] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.960928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.961014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.272 [2024-10-08 18:28:16.961025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.272 [2024-10-08 18:28:16.961035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.961044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.961081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.272 [2024-10-08 18:28:16.961093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:28.272 [2024-10-08 18:28:16.961101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.961111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.961170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.272 [2024-10-08 18:28:16.961184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.272 [2024-10-08 18:28:16.961194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.961205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 [2024-10-08 18:28:16.961271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.272 [2024-10-08 18:28:16.961295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.272 [2024-10-08 18:28:16.961305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.272 [2024-10-08 18:28:16.961317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.272 
[2024-10-08 18:28:16.961538] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 95.395 ms, result 0 00:17:28.534 00:17:28.534 00:17:28.534 18:28:17 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=86816 00:17:28.534 18:28:17 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 86816 00:17:28.534 18:28:17 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:28.534 18:28:17 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86816 ']' 00:17:28.534 18:28:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.534 18:28:17 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:28.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.534 18:28:17 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.534 18:28:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:28.534 18:28:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:28.796 [2024-10-08 18:28:17.411803] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:28.796 [2024-10-08 18:28:17.411960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86816 ] 00:17:28.796 [2024-10-08 18:28:17.545741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:28.796 [2024-10-08 18:28:17.564233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.796 [2024-10-08 18:28:17.637993] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.742 18:28:18 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.742 18:28:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:29.743 18:28:18 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:29.743 [2024-10-08 18:28:18.477694] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:29.743 [2024-10-08 18:28:18.477801] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.007 [2024-10-08 18:28:18.657845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.657915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:30.007 [2024-10-08 18:28:18.657940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:30.007 [2024-10-08 18:28:18.657950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.660656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.660715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.007 [2024-10-08 18:28:18.660730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.682 ms 00:17:30.007 [2024-10-08 18:28:18.660741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.660866] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:30.007 [2024-10-08 18:28:18.661162] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:30.007 [2024-10-08 18:28:18.661181] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.661190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.007 [2024-10-08 18:28:18.661203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:17:30.007 [2024-10-08 18:28:18.661212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.664100] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:30.007 [2024-10-08 18:28:18.668850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.668916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:30.007 [2024-10-08 18:28:18.668929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.757 ms 00:17:30.007 [2024-10-08 18:28:18.668941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.669027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.669045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:30.007 [2024-10-08 18:28:18.669056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:30.007 [2024-10-08 18:28:18.669071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.680666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.680718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.007 [2024-10-08 18:28:18.680730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.538 ms 00:17:30.007 [2024-10-08 18:28:18.680762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.680918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 
[2024-10-08 18:28:18.680934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.007 [2024-10-08 18:28:18.680947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:30.007 [2024-10-08 18:28:18.680959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.680993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.681004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:30.007 [2024-10-08 18:28:18.681012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:30.007 [2024-10-08 18:28:18.681027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.681062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:30.007 [2024-10-08 18:28:18.683772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.683820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.007 [2024-10-08 18:28:18.683833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.713 ms 00:17:30.007 [2024-10-08 18:28:18.683841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.683899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.683907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:30.007 [2024-10-08 18:28:18.683919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:30.007 [2024-10-08 18:28:18.683926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.683953] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:30.007 [2024-10-08 18:28:18.683978] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:30.007 [2024-10-08 18:28:18.684028] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:30.007 [2024-10-08 18:28:18.684047] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:30.007 [2024-10-08 18:28:18.684163] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:30.007 [2024-10-08 18:28:18.684175] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:30.007 [2024-10-08 18:28:18.684196] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:30.007 [2024-10-08 18:28:18.684207] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:30.007 [2024-10-08 18:28:18.684222] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:30.007 [2024-10-08 18:28:18.684231] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:30.007 [2024-10-08 18:28:18.684242] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:30.007 [2024-10-08 18:28:18.684249] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:30.007 [2024-10-08 18:28:18.684261] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:30.007 [2024-10-08 18:28:18.684272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.684283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:30.007 [2024-10-08 18:28:18.684291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 
00:17:30.007 [2024-10-08 18:28:18.684301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.684393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.007 [2024-10-08 18:28:18.684405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:30.007 [2024-10-08 18:28:18.684416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:30.007 [2024-10-08 18:28:18.684427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.007 [2024-10-08 18:28:18.684535] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:30.007 [2024-10-08 18:28:18.684561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:30.007 [2024-10-08 18:28:18.684571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.007 [2024-10-08 18:28:18.684586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.007 [2024-10-08 18:28:18.684595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:30.007 [2024-10-08 18:28:18.684605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:30.007 [2024-10-08 18:28:18.684613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:30.007 [2024-10-08 18:28:18.684624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:30.007 [2024-10-08 18:28:18.684632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:30.007 [2024-10-08 18:28:18.684643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.007 [2024-10-08 18:28:18.684651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:30.007 [2024-10-08 18:28:18.684662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:30.007 [2024-10-08 18:28:18.684670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 
MiB 00:17:30.007 [2024-10-08 18:28:18.684690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:30.008 [2024-10-08 18:28:18.684698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:30.008 [2024-10-08 18:28:18.684709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.008 [2024-10-08 18:28:18.684720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:30.008 [2024-10-08 18:28:18.684731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:30.008 [2024-10-08 18:28:18.684740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.008 [2024-10-08 18:28:18.684768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:30.008 [2024-10-08 18:28:18.684777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:30.008 [2024-10-08 18:28:18.684787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.008 [2024-10-08 18:28:18.684795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:30.008 [2024-10-08 18:28:18.684805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:30.008 [2024-10-08 18:28:18.684813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.008 [2024-10-08 18:28:18.684825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:30.008 [2024-10-08 18:28:18.684834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:30.008 [2024-10-08 18:28:18.684845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:30.008 [2024-10-08 18:28:18.684853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:30.008 [2024-10-08 18:28:18.684863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:30.008 [2024-10-08 18:28:18.684871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:17:30.008 [2024-10-08 18:28:18.684881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:30.008 [2024-10-08 18:28:18.684889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:30.008 [2024-10-08 18:28:18.684898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.008 [2024-10-08 18:28:18.684906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:30.008 [2024-10-08 18:28:18.684919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:30.008 [2024-10-08 18:28:18.684926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.008 [2024-10-08 18:28:18.684937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:30.008 [2024-10-08 18:28:18.684945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:30.008 [2024-10-08 18:28:18.684955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.008 [2024-10-08 18:28:18.684963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:30.008 [2024-10-08 18:28:18.684973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:30.008 [2024-10-08 18:28:18.684980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.008 [2024-10-08 18:28:18.684988] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:30.008 [2024-10-08 18:28:18.684996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:30.008 [2024-10-08 18:28:18.685010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.008 [2024-10-08 18:28:18.685018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.008 [2024-10-08 18:28:18.685028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:30.008 [2024-10-08 18:28:18.685039] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:30.008 [2024-10-08 18:28:18.685049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:30.008 [2024-10-08 18:28:18.685057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:30.008 [2024-10-08 18:28:18.685071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:30.008 [2024-10-08 18:28:18.685078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:30.008 [2024-10-08 18:28:18.685089] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:30.008 [2024-10-08 18:28:18.685099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.008 [2024-10-08 18:28:18.685110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:30.008 [2024-10-08 18:28:18.685118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:30.008 [2024-10-08 18:28:18.685128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:30.008 [2024-10-08 18:28:18.685136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:30.008 [2024-10-08 18:28:18.685146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:30.008 [2024-10-08 18:28:18.685154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:30.008 [2024-10-08 18:28:18.685163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:30.008 [2024-10-08 18:28:18.685171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:30.008 [2024-10-08 18:28:18.685181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:30.008 [2024-10-08 18:28:18.685189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:30.008 [2024-10-08 18:28:18.685199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:30.008 [2024-10-08 18:28:18.685207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:30.008 [2024-10-08 18:28:18.685220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:30.008 [2024-10-08 18:28:18.685228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:30.008 [2024-10-08 18:28:18.685238] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:30.008 [2024-10-08 18:28:18.685251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.008 [2024-10-08 18:28:18.685265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:30.008 [2024-10-08 18:28:18.685272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:30.008 [2024-10-08 18:28:18.685284] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:30.008 [2024-10-08 18:28:18.685292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:30.008 [2024-10-08 18:28:18.685303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.685311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:30.008 [2024-10-08 18:28:18.685321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:17:30.008 [2024-10-08 18:28:18.685328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.008 [2024-10-08 18:28:18.705777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.705830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.008 [2024-10-08 18:28:18.705845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.357 ms 00:17:30.008 [2024-10-08 18:28:18.705854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.008 [2024-10-08 18:28:18.705999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.706014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:30.008 [2024-10-08 18:28:18.706025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:30.008 [2024-10-08 18:28:18.706039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.008 [2024-10-08 18:28:18.722661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.722713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.008 [2024-10-08 18:28:18.722727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 16.594 ms 00:17:30.008 [2024-10-08 18:28:18.722736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.008 [2024-10-08 18:28:18.722835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.722847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.008 [2024-10-08 18:28:18.722860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:30.008 [2024-10-08 18:28:18.722869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.008 [2024-10-08 18:28:18.723549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.723592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.008 [2024-10-08 18:28:18.723606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:17:30.008 [2024-10-08 18:28:18.723616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.008 [2024-10-08 18:28:18.723808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.723819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.008 [2024-10-08 18:28:18.723834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:17:30.008 [2024-10-08 18:28:18.723843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.008 [2024-10-08 18:28:18.744781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.744852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.008 [2024-10-08 18:28:18.744875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.876 ms 00:17:30.008 [2024-10-08 18:28:18.744895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.008 [2024-10-08 18:28:18.750135] 
ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:30.008 [2024-10-08 18:28:18.750194] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:30.008 [2024-10-08 18:28:18.750213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.750223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:30.008 [2024-10-08 18:28:18.750236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.109 ms 00:17:30.008 [2024-10-08 18:28:18.750243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.008 [2024-10-08 18:28:18.766495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.766549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:30.008 [2024-10-08 18:28:18.766569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.125 ms 00:17:30.008 [2024-10-08 18:28:18.766578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.008 [2024-10-08 18:28:18.769662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.008 [2024-10-08 18:28:18.769713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:30.008 [2024-10-08 18:28:18.769727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms 00:17:30.009 [2024-10-08 18:28:18.769735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.009 [2024-10-08 18:28:18.772493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.009 [2024-10-08 18:28:18.772540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:30.009 [2024-10-08 18:28:18.772554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:17:30.009 
[2024-10-08 18:28:18.772561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.009 [2024-10-08 18:28:18.772970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.009 [2024-10-08 18:28:18.772992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:30.009 [2024-10-08 18:28:18.773005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:17:30.009 [2024-10-08 18:28:18.773014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.009 [2024-10-08 18:28:18.802705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.009 [2024-10-08 18:28:18.802789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:30.009 [2024-10-08 18:28:18.802810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.659 ms 00:17:30.009 [2024-10-08 18:28:18.802820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.009 [2024-10-08 18:28:18.812171] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:30.009 [2024-10-08 18:28:18.836452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.009 [2024-10-08 18:28:18.836528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:30.009 [2024-10-08 18:28:18.836543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.531 ms 00:17:30.009 [2024-10-08 18:28:18.836557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.009 [2024-10-08 18:28:18.836666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.009 [2024-10-08 18:28:18.836681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:30.009 [2024-10-08 18:28:18.836691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:30.009 [2024-10-08 18:28:18.836707] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.009 [2024-10-08 18:28:18.836805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.009 [2024-10-08 18:28:18.836818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:30.009 [2024-10-08 18:28:18.836833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:30.009 [2024-10-08 18:28:18.836843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.009 [2024-10-08 18:28:18.836874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.009 [2024-10-08 18:28:18.836889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:30.009 [2024-10-08 18:28:18.836897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:30.009 [2024-10-08 18:28:18.836911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.009 [2024-10-08 18:28:18.836958] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:30.009 [2024-10-08 18:28:18.836972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.009 [2024-10-08 18:28:18.836980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:30.009 [2024-10-08 18:28:18.836991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:30.009 [2024-10-08 18:28:18.836998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.009 [2024-10-08 18:28:18.844035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.009 [2024-10-08 18:28:18.844092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:30.009 [2024-10-08 18:28:18.844107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.005 ms 00:17:30.009 [2024-10-08 18:28:18.844117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:30.009 [2024-10-08 18:28:18.844229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.009 [2024-10-08 18:28:18.844240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:30.009 [2024-10-08 18:28:18.844252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:30.009 [2024-10-08 18:28:18.844261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.009 [2024-10-08 18:28:18.845599] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.009 [2024-10-08 18:28:18.847071] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 187.313 ms, result 0 00:17:30.009 [2024-10-08 18:28:18.849420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:30.271 Some configs were skipped because the RPC state that can call them passed over. 
00:17:30.271 18:28:18 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:30.271 [2024-10-08 18:28:19.078882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.271 [2024-10-08 18:28:19.078946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:30.271 [2024-10-08 18:28:19.078959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.190 ms 00:17:30.271 [2024-10-08 18:28:19.078970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.271 [2024-10-08 18:28:19.079013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.327 ms, result 0 00:17:30.271 true 00:17:30.271 18:28:19 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:30.532 [2024-10-08 18:28:19.286443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.532 [2024-10-08 18:28:19.286506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:30.532 [2024-10-08 18:28:19.286568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.497 ms 00:17:30.532 [2024-10-08 18:28:19.286577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.532 [2024-10-08 18:28:19.286620] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.688 ms, result 0 00:17:30.532 true 00:17:30.532 18:28:19 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 86816 00:17:30.532 18:28:19 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86816 ']' 00:17:30.532 18:28:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86816 00:17:30.532 18:28:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:30.532 18:28:19 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:17:30.532 18:28:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86816 00:17:30.532 killing process with pid 86816 00:17:30.532 18:28:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:30.532 18:28:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:30.532 18:28:19 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86816' 00:17:30.532 18:28:19 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86816 00:17:30.532 18:28:19 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86816 00:17:30.795 [2024-10-08 18:28:19.532285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.532360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:30.795 [2024-10-08 18:28:19.532377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:30.795 [2024-10-08 18:28:19.532390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.532416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:30.795 [2024-10-08 18:28:19.533330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.533369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:30.795 [2024-10-08 18:28:19.533385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:17:30.795 [2024-10-08 18:28:19.533397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.533729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.533763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:30.795 [2024-10-08 18:28:19.533777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.297 ms 00:17:30.795 [2024-10-08 18:28:19.533785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.538960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.539010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:30.795 [2024-10-08 18:28:19.539023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.149 ms 00:17:30.795 [2024-10-08 18:28:19.539031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.546162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.546202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:30.795 [2024-10-08 18:28:19.546224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.069 ms 00:17:30.795 [2024-10-08 18:28:19.546232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.549323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.549371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:30.795 [2024-10-08 18:28:19.549384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.012 ms 00:17:30.795 [2024-10-08 18:28:19.549393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.554622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.554672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:30.795 [2024-10-08 18:28:19.554686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.175 ms 00:17:30.795 [2024-10-08 18:28:19.554695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.554879] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.554903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:30.795 [2024-10-08 18:28:19.554917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:30.795 [2024-10-08 18:28:19.554925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.558452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.558505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:30.795 [2024-10-08 18:28:19.558522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.500 ms 00:17:30.795 [2024-10-08 18:28:19.558530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.561652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.561700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:30.795 [2024-10-08 18:28:19.561713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.049 ms 00:17:30.795 [2024-10-08 18:28:19.561720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.564212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.564265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:30.795 [2024-10-08 18:28:19.564279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.419 ms 00:17:30.795 [2024-10-08 18:28:19.564287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.566551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.795 [2024-10-08 18:28:19.566603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:30.795 
[2024-10-08 18:28:19.566616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.172 ms 00:17:30.795 [2024-10-08 18:28:19.566622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.795 [2024-10-08 18:28:19.566672] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:30.795 [2024-10-08 18:28:19.566689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
12: 0 / 261120 wr_cnt: 0 state: free 00:17:30.795 [2024-10-08 18:28:19.566828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.566998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567222] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567354] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567481] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 
18:28:19.567614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:30.796 [2024-10-08 18:28:19.567671] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:30.796 [2024-10-08 18:28:19.567682] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d5ed988-dd07-46b0-9a05-0af5ae35e0f7 00:17:30.796 [2024-10-08 18:28:19.567691] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:30.796 [2024-10-08 18:28:19.567702] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:30.796 [2024-10-08 18:28:19.567714] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:30.796 [2024-10-08 18:28:19.567725] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:30.796 [2024-10-08 18:28:19.567733] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:30.796 [2024-10-08 18:28:19.567743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:30.796 [2024-10-08 18:28:19.567767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:30.797 [2024-10-08 18:28:19.567775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:30.797 [2024-10-08 18:28:19.567782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:30.797 [2024-10-08 18:28:19.567792] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.797 [2024-10-08 18:28:19.567801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:30.797 [2024-10-08 18:28:19.567817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.124 ms 00:17:30.797 [2024-10-08 18:28:19.567824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.570908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.797 [2024-10-08 18:28:19.570948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:30.797 [2024-10-08 18:28:19.570962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.040 ms 00:17:30.797 [2024-10-08 18:28:19.570971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.571145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.797 [2024-10-08 18:28:19.571157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:30.797 [2024-10-08 18:28:19.571169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:30.797 [2024-10-08 18:28:19.571177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.582061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.582113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.797 [2024-10-08 18:28:19.582128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.582136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.582242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.582261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:17:30.797 [2024-10-08 18:28:19.582278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.582286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.582342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.582356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.797 [2024-10-08 18:28:19.582367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.582376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.582400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.582408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.797 [2024-10-08 18:28:19.582420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.582428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.602512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.602579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.797 [2024-10-08 18:28:19.602596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.602605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.618267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.618330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.797 [2024-10-08 18:28:19.618351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.618361] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.618449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.618460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.797 [2024-10-08 18:28:19.618477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.618485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.618528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.618543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.797 [2024-10-08 18:28:19.618554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.618563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.618655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.618666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.797 [2024-10-08 18:28:19.618678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.618689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.618730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.618741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:30.797 [2024-10-08 18:28:19.618776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.618785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.618845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.618861] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.797 [2024-10-08 18:28:19.618875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.618887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.618957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:30.797 [2024-10-08 18:28:19.618969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.797 [2024-10-08 18:28:19.618982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:30.797 [2024-10-08 18:28:19.618992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.797 [2024-10-08 18:28:19.619185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 86.855 ms, result 0 00:17:31.369 18:28:19 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:31.369 18:28:19 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:31.369 [2024-10-08 18:28:20.048451] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:31.369 [2024-10-08 18:28:20.048608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86859 ] 00:17:31.369 [2024-10-08 18:28:20.182316] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:31.369 [2024-10-08 18:28:20.202046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.631 [2024-10-08 18:28:20.275804] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.631 [2024-10-08 18:28:20.424479] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.631 [2024-10-08 18:28:20.424586] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:31.894 [2024-10-08 18:28:20.589089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.589154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:31.894 [2024-10-08 18:28:20.589172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:31.894 [2024-10-08 18:28:20.589182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.591949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.592003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.894 [2024-10-08 18:28:20.592019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:17:31.894 [2024-10-08 18:28:20.592029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.592127] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:31.894 [2024-10-08 18:28:20.592412] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:31.894 [2024-10-08 18:28:20.592444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.592453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.894 [2024-10-08 18:28:20.592468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 
00:17:31.894 [2024-10-08 18:28:20.592477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.595406] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:31.894 [2024-10-08 18:28:20.600203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.600266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:31.894 [2024-10-08 18:28:20.600283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.800 ms 00:17:31.894 [2024-10-08 18:28:20.600292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.600383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.600394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:31.894 [2024-10-08 18:28:20.600404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:31.894 [2024-10-08 18:28:20.600412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.612079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.612133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.894 [2024-10-08 18:28:20.612150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.618 ms 00:17:31.894 [2024-10-08 18:28:20.612163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.612306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.612319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.894 [2024-10-08 18:28:20.612329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:31.894 [2024-10-08 18:28:20.612338] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.612369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.612382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:31.894 [2024-10-08 18:28:20.612391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:31.894 [2024-10-08 18:28:20.612400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.612428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:31.894 [2024-10-08 18:28:20.615150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.615198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.894 [2024-10-08 18:28:20.615209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:17:31.894 [2024-10-08 18:28:20.615217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.615264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.615284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:31.894 [2024-10-08 18:28:20.615293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:31.894 [2024-10-08 18:28:20.615302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.615322] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:31.894 [2024-10-08 18:28:20.615348] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:31.894 [2024-10-08 18:28:20.615392] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 
00:17:31.894 [2024-10-08 18:28:20.615416] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:31.894 [2024-10-08 18:28:20.615534] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:31.894 [2024-10-08 18:28:20.615550] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:31.894 [2024-10-08 18:28:20.615562] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:31.894 [2024-10-08 18:28:20.615573] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:31.894 [2024-10-08 18:28:20.615584] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:31.894 [2024-10-08 18:28:20.615593] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:31.894 [2024-10-08 18:28:20.615602] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:31.894 [2024-10-08 18:28:20.615610] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:31.894 [2024-10-08 18:28:20.615618] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:31.894 [2024-10-08 18:28:20.615629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.615640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:31.894 [2024-10-08 18:28:20.615649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:31.894 [2024-10-08 18:28:20.615656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.615746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.894 [2024-10-08 18:28:20.615798] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:31.894 [2024-10-08 18:28:20.615808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:31.894 [2024-10-08 18:28:20.615817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.894 [2024-10-08 18:28:20.615922] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:31.894 [2024-10-08 18:28:20.615940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:31.894 [2024-10-08 18:28:20.615950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.894 [2024-10-08 18:28:20.615963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.894 [2024-10-08 18:28:20.615972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:31.894 [2024-10-08 18:28:20.615980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:31.894 [2024-10-08 18:28:20.615997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:31.894 [2024-10-08 18:28:20.616008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:31.894 [2024-10-08 18:28:20.616017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:31.894 [2024-10-08 18:28:20.616025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.894 [2024-10-08 18:28:20.616033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:31.894 [2024-10-08 18:28:20.616040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:31.894 [2024-10-08 18:28:20.616049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:31.894 [2024-10-08 18:28:20.616058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:31.894 [2024-10-08 18:28:20.616066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:31.894 [2024-10-08 18:28:20.616074] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.894 [2024-10-08 18:28:20.616082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:31.894 [2024-10-08 18:28:20.616090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:31.894 [2024-10-08 18:28:20.616098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.894 [2024-10-08 18:28:20.616106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:31.894 [2024-10-08 18:28:20.616114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:31.894 [2024-10-08 18:28:20.616124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.894 [2024-10-08 18:28:20.616133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:31.894 [2024-10-08 18:28:20.616147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:31.894 [2024-10-08 18:28:20.616156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.894 [2024-10-08 18:28:20.616167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:31.894 [2024-10-08 18:28:20.616176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:31.894 [2024-10-08 18:28:20.616184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.894 [2024-10-08 18:28:20.616193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:31.894 [2024-10-08 18:28:20.616202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:31.894 [2024-10-08 18:28:20.616210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:31.894 [2024-10-08 18:28:20.616219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:31.894 [2024-10-08 18:28:20.616227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:31.894 [2024-10-08 
18:28:20.616235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.894 [2024-10-08 18:28:20.616244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:31.894 [2024-10-08 18:28:20.616252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:31.894 [2024-10-08 18:28:20.616261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:31.894 [2024-10-08 18:28:20.616268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:31.894 [2024-10-08 18:28:20.616277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:31.894 [2024-10-08 18:28:20.616287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.894 [2024-10-08 18:28:20.616296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:31.894 [2024-10-08 18:28:20.616304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:31.894 [2024-10-08 18:28:20.616311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.895 [2024-10-08 18:28:20.616319] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:31.895 [2024-10-08 18:28:20.616329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:31.895 [2024-10-08 18:28:20.616339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:31.895 [2024-10-08 18:28:20.616347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:31.895 [2024-10-08 18:28:20.616354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:31.895 [2024-10-08 18:28:20.616361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:31.895 [2024-10-08 18:28:20.616368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:31.895 [2024-10-08 18:28:20.616376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region data_btm 00:17:31.895 [2024-10-08 18:28:20.616382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:31.895 [2024-10-08 18:28:20.616389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:31.895 [2024-10-08 18:28:20.616400] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:31.895 [2024-10-08 18:28:20.616416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.895 [2024-10-08 18:28:20.616431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:31.895 [2024-10-08 18:28:20.616439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:31.895 [2024-10-08 18:28:20.616447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:31.895 [2024-10-08 18:28:20.616456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:31.895 [2024-10-08 18:28:20.616464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:31.895 [2024-10-08 18:28:20.616471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:31.895 [2024-10-08 18:28:20.616479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:31.895 [2024-10-08 18:28:20.616486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:31.895 [2024-10-08 18:28:20.616494] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:31.895 [2024-10-08 18:28:20.616501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:31.895 [2024-10-08 18:28:20.616508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:31.895 [2024-10-08 18:28:20.616515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:31.895 [2024-10-08 18:28:20.616523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:31.895 [2024-10-08 18:28:20.616530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:31.895 [2024-10-08 18:28:20.616537] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:31.895 [2024-10-08 18:28:20.616546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:31.895 [2024-10-08 18:28:20.616561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:31.895 [2024-10-08 18:28:20.616569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:31.895 [2024-10-08 18:28:20.616576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:31.895 [2024-10-08 18:28:20.616583] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:31.895 [2024-10-08 18:28:20.616591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.616601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:31.895 [2024-10-08 18:28:20.616609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:17:31.895 [2024-10-08 18:28:20.616616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.650959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.651037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.895 [2024-10-08 18:28:20.651054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.268 ms 00:17:31.895 [2024-10-08 18:28:20.651066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.651265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.651282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.895 [2024-10-08 18:28:20.651301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:31.895 [2024-10-08 18:28:20.651312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.667409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.667464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.895 [2024-10-08 18:28:20.667477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.068 ms 00:17:31.895 [2024-10-08 18:28:20.667485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.667571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 
18:28:20.667589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.895 [2024-10-08 18:28:20.667602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.895 [2024-10-08 18:28:20.667611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.668324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.668367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.895 [2024-10-08 18:28:20.668379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.687 ms 00:17:31.895 [2024-10-08 18:28:20.668389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.668568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.668581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.895 [2024-10-08 18:28:20.668592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:17:31.895 [2024-10-08 18:28:20.668610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.678979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.679027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.895 [2024-10-08 18:28:20.679048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.343 ms 00:17:31.895 [2024-10-08 18:28:20.679061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.684134] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:31.895 [2024-10-08 18:28:20.684190] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:31.895 
[2024-10-08 18:28:20.684204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.684213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:31.895 [2024-10-08 18:28:20.684223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.016 ms 00:17:31.895 [2024-10-08 18:28:20.684231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.700307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.700360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:31.895 [2024-10-08 18:28:20.700374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.990 ms 00:17:31.895 [2024-10-08 18:28:20.700393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.703737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.703806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:31.895 [2024-10-08 18:28:20.703818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.242 ms 00:17:31.895 [2024-10-08 18:28:20.703827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.706488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.706553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:31.895 [2024-10-08 18:28:20.706564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:17:31.895 [2024-10-08 18:28:20.706572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.706982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.707010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.895 [2024-10-08 18:28:20.707021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:17:31.895 [2024-10-08 18:28:20.707030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.895 [2024-10-08 18:28:20.736672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.895 [2024-10-08 18:28:20.736773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:31.895 [2024-10-08 18:28:20.736795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.612 ms 00:17:31.895 [2024-10-08 18:28:20.736805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.158 [2024-10-08 18:28:20.746087] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:32.158 [2024-10-08 18:28:20.771684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.158 [2024-10-08 18:28:20.771762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:32.158 [2024-10-08 18:28:20.771780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.754 ms 00:17:32.158 [2024-10-08 18:28:20.771790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.158 [2024-10-08 18:28:20.771919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.158 [2024-10-08 18:28:20.771933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:32.158 [2024-10-08 18:28:20.771944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:32.158 [2024-10-08 18:28:20.771953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.158 [2024-10-08 18:28:20.772037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.158 [2024-10-08 18:28:20.772048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:32.158 
[2024-10-08 18:28:20.772057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:32.158 [2024-10-08 18:28:20.772066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.158 [2024-10-08 18:28:20.772097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.158 [2024-10-08 18:28:20.772107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:32.158 [2024-10-08 18:28:20.772117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:32.158 [2024-10-08 18:28:20.772126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.158 [2024-10-08 18:28:20.772171] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:32.158 [2024-10-08 18:28:20.772185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.158 [2024-10-08 18:28:20.772201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:32.158 [2024-10-08 18:28:20.772211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:32.158 [2024-10-08 18:28:20.772221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.158 [2024-10-08 18:28:20.779192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.158 [2024-10-08 18:28:20.779248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:32.158 [2024-10-08 18:28:20.779261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.944 ms 00:17:32.158 [2024-10-08 18:28:20.779270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.158 [2024-10-08 18:28:20.779391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.158 [2024-10-08 18:28:20.779407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:32.158 [2024-10-08 18:28:20.779418] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:32.158 [2024-10-08 18:28:20.779427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.158 [2024-10-08 18:28:20.780694] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:32.158 [2024-10-08 18:28:20.782204] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 191.209 ms, result 0 00:17:32.158 [2024-10-08 18:28:20.784012] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:32.158 [2024-10-08 18:28:20.790925] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:33.102  [2024-10-08T18:28:22.896Z] Copying: 13/256 [MB] (13 MBps) [2024-10-08T18:28:23.840Z] Copying: 23092/262144 [kB] (9404 kBps) [2024-10-08T18:28:25.228Z] Copying: 32/256 [MB] (10 MBps) [2024-10-08T18:28:25.802Z] Copying: 42680/262144 [kB] (9184 kBps) [2024-10-08T18:28:27.193Z] Copying: 52488/262144 [kB] (9808 kBps) [2024-10-08T18:28:28.138Z] Copying: 61/256 [MB] (10 MBps) [2024-10-08T18:28:29.108Z] Copying: 78/256 [MB] (16 MBps) [2024-10-08T18:28:30.052Z] Copying: 107/256 [MB] (29 MBps) [2024-10-08T18:28:30.996Z] Copying: 138/256 [MB] (30 MBps) [2024-10-08T18:28:31.938Z] Copying: 163/256 [MB] (24 MBps) [2024-10-08T18:28:32.881Z] Copying: 189/256 [MB] (26 MBps) [2024-10-08T18:28:33.826Z] Copying: 217/256 [MB] (27 MBps) [2024-10-08T18:28:34.399Z] Copying: 244/256 [MB] (26 MBps) [2024-10-08T18:28:34.399Z] Copying: 256/256 [MB] (average 19 MBps)[2024-10-08 18:28:34.252722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:45.549 [2024-10-08 18:28:34.253803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.253833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO 
channel 00:17:45.549 [2024-10-08 18:28:34.253849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:45.549 [2024-10-08 18:28:34.253857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.253878] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:45.549 [2024-10-08 18:28:34.254287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.254309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:45.549 [2024-10-08 18:28:34.254319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:17:45.549 [2024-10-08 18:28:34.254328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.254575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.254593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:45.549 [2024-10-08 18:28:34.254602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:17:45.549 [2024-10-08 18:28:34.254611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.258322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.258345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:45.549 [2024-10-08 18:28:34.258355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.689 ms 00:17:45.549 [2024-10-08 18:28:34.258363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.265292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.265318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:45.549 [2024-10-08 18:28:34.265328] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.902 ms 00:17:45.549 [2024-10-08 18:28:34.265335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.266836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.266866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:45.549 [2024-10-08 18:28:34.266875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:17:45.549 [2024-10-08 18:28:34.266882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.270927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.271050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:45.549 [2024-10-08 18:28:34.271099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.002 ms 00:17:45.549 [2024-10-08 18:28:34.271122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.271505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.271557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:45.549 [2024-10-08 18:28:34.271581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:17:45.549 [2024-10-08 18:28:34.271602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.274385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.274460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:45.549 [2024-10-08 18:28:34.274484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.730 ms 00:17:45.549 [2024-10-08 18:28:34.274503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:45.549 [2024-10-08 18:28:34.276564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.276631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:45.549 [2024-10-08 18:28:34.276654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.938 ms 00:17:45.549 [2024-10-08 18:28:34.276672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.278395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.278429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:45.549 [2024-10-08 18:28:34.278437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.655 ms 00:17:45.549 [2024-10-08 18:28:34.278445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.279456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.549 [2024-10-08 18:28:34.279489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:45.549 [2024-10-08 18:28:34.279497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:17:45.549 [2024-10-08 18:28:34.279505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.549 [2024-10-08 18:28:34.279534] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:45.549 [2024-10-08 18:28:34.279548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:45.549 [2024-10-08 18:28:34.279558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279903] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.279992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280008] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280113] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 
18:28:34.280217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:45.550 [2024-10-08 18:28:34.280267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:45.551 [2024-10-08 18:28:34.280274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:45.551 [2024-10-08 18:28:34.280281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:45.551 [2024-10-08 18:28:34.280290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:45.551 [2024-10-08 18:28:34.280298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:45.551 [2024-10-08 18:28:34.280306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:45.551 [2024-10-08 18:28:34.280321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:45.551 [2024-10-08 18:28:34.280329] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d5ed988-dd07-46b0-9a05-0af5ae35e0f7 00:17:45.551 [2024-10-08 18:28:34.280337] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:45.551 [2024-10-08 18:28:34.280345] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:45.551 [2024-10-08 18:28:34.280352] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:45.551 [2024-10-08 18:28:34.280360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:45.551 [2024-10-08 18:28:34.280368] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:45.551 [2024-10-08 18:28:34.280383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:45.551 [2024-10-08 18:28:34.280391] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:45.551 [2024-10-08 18:28:34.280397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:45.551 [2024-10-08 18:28:34.280404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:45.551 [2024-10-08 18:28:34.280411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.551 [2024-10-08 18:28:34.280418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:45.551 [2024-10-08 18:28:34.280429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.878 ms 00:17:45.551 [2024-10-08 18:28:34.280436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.282318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.551 [2024-10-08 18:28:34.282345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:45.551 [2024-10-08 18:28:34.282355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.865 ms 00:17:45.551 [2024-10-08 18:28:34.282362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:45.551 [2024-10-08 18:28:34.282465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.551 [2024-10-08 18:28:34.282475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:45.551 [2024-10-08 18:28:34.282488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:45.551 [2024-10-08 18:28:34.282496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.288575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.288612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:45.551 [2024-10-08 18:28:34.288622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.288630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.288762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.288781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:45.551 [2024-10-08 18:28:34.288790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.288798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.288838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.288847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:45.551 [2024-10-08 18:28:34.288855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.288863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.288880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.288892] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:45.551 [2024-10-08 18:28:34.288903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.288910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.300573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.300620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:45.551 [2024-10-08 18:28:34.300630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.300639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.309809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.309860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:45.551 [2024-10-08 18:28:34.309871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.309879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.309911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.309920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:45.551 [2024-10-08 18:28:34.309933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.309941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.309970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.309981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:45.551 [2024-10-08 18:28:34.309989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:45.551 [2024-10-08 18:28:34.309998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.310068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.310079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:45.551 [2024-10-08 18:28:34.310087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.310095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.310125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.310134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:45.551 [2024-10-08 18:28:34.310142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.310152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.310191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.310210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:45.551 [2024-10-08 18:28:34.310219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.310226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.310275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.551 [2024-10-08 18:28:34.310285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:45.551 [2024-10-08 18:28:34.310293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.551 [2024-10-08 18:28:34.310303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.551 [2024-10-08 18:28:34.310453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: 
[FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.624 ms, result 0 00:17:45.813 00:17:45.813 00:17:45.813 18:28:34 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:45.813 18:28:34 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:46.387 18:28:35 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:46.387 [2024-10-08 18:28:35.149777] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:46.387 [2024-10-08 18:28:35.149906] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87027 ] 00:17:46.648 [2024-10-08 18:28:35.277910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:46.649 [2024-10-08 18:28:35.298705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.649 [2024-10-08 18:28:35.342503] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.649 [2024-10-08 18:28:35.445056] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.649 [2024-10-08 18:28:35.445136] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:46.953 [2024-10-08 18:28:35.603140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.603209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:46.953 [2024-10-08 18:28:35.603230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:46.953 [2024-10-08 18:28:35.603241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.605712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.605766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:46.953 [2024-10-08 18:28:35.605786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.448 ms 00:17:46.953 [2024-10-08 18:28:35.605798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.605895] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:46.953 [2024-10-08 18:28:35.606183] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:46.953 [2024-10-08 18:28:35.606213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.606226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:46.953 [2024-10-08 18:28:35.606244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 
00:17:46.953 [2024-10-08 18:28:35.606257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.607973] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:46.953 [2024-10-08 18:28:35.611119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.611160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:46.953 [2024-10-08 18:28:35.611179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.149 ms 00:17:46.953 [2024-10-08 18:28:35.611191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.611276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.611293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:46.953 [2024-10-08 18:28:35.611308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:46.953 [2024-10-08 18:28:35.611321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.617686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.617719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.953 [2024-10-08 18:28:35.617734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.302 ms 00:17:46.953 [2024-10-08 18:28:35.617746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.617919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.617935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.953 [2024-10-08 18:28:35.617950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:46.953 [2024-10-08 18:28:35.617962] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.618018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.618041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:46.953 [2024-10-08 18:28:35.618054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:46.953 [2024-10-08 18:28:35.618066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.618101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:46.953 [2024-10-08 18:28:35.619839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.619870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.953 [2024-10-08 18:28:35.619883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.749 ms 00:17:46.953 [2024-10-08 18:28:35.619895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.619947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.619970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:46.953 [2024-10-08 18:28:35.619984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:46.953 [2024-10-08 18:28:35.619996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.620023] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:46.953 [2024-10-08 18:28:35.620060] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:46.953 [2024-10-08 18:28:35.620117] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 
00:17:46.953 [2024-10-08 18:28:35.620141] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:46.953 [2024-10-08 18:28:35.620291] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:46.953 [2024-10-08 18:28:35.620316] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:46.953 [2024-10-08 18:28:35.620339] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:46.953 [2024-10-08 18:28:35.620356] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:46.953 [2024-10-08 18:28:35.620371] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:46.953 [2024-10-08 18:28:35.620385] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:46.953 [2024-10-08 18:28:35.620397] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:46.953 [2024-10-08 18:28:35.620410] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:46.953 [2024-10-08 18:28:35.620423] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:46.953 [2024-10-08 18:28:35.620439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.620454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:46.953 [2024-10-08 18:28:35.620471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:17:46.953 [2024-10-08 18:28:35.620484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.620610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.953 [2024-10-08 18:28:35.620625] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:46.953 [2024-10-08 18:28:35.620641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:46.953 [2024-10-08 18:28:35.620660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.953 [2024-10-08 18:28:35.620816] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:46.954 [2024-10-08 18:28:35.620848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:46.954 [2024-10-08 18:28:35.620863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.954 [2024-10-08 18:28:35.620889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.954 [2024-10-08 18:28:35.620902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:46.954 [2024-10-08 18:28:35.620914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:46.954 [2024-10-08 18:28:35.620933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:46.954 [2024-10-08 18:28:35.620948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:46.954 [2024-10-08 18:28:35.620960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:46.954 [2024-10-08 18:28:35.620971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.954 [2024-10-08 18:28:35.620983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:46.954 [2024-10-08 18:28:35.620994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:46.954 [2024-10-08 18:28:35.621006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.954 [2024-10-08 18:28:35.621018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:46.954 [2024-10-08 18:28:35.621030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:46.954 [2024-10-08 18:28:35.621041] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.954 [2024-10-08 18:28:35.621053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:46.954 [2024-10-08 18:28:35.621064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:46.954 [2024-10-08 18:28:35.621076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.954 [2024-10-08 18:28:35.621088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:46.954 [2024-10-08 18:28:35.621100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:46.954 [2024-10-08 18:28:35.621112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.954 [2024-10-08 18:28:35.621124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:46.954 [2024-10-08 18:28:35.621140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:46.954 [2024-10-08 18:28:35.621153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.954 [2024-10-08 18:28:35.621164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:46.954 [2024-10-08 18:28:35.621175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:46.954 [2024-10-08 18:28:35.621187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.954 [2024-10-08 18:28:35.621198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:46.954 [2024-10-08 18:28:35.621210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:46.954 [2024-10-08 18:28:35.621221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.954 [2024-10-08 18:28:35.621232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:46.954 [2024-10-08 18:28:35.621244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:46.954 [2024-10-08 
18:28:35.621254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.954 [2024-10-08 18:28:35.621266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:46.954 [2024-10-08 18:28:35.621278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:46.954 [2024-10-08 18:28:35.621289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.954 [2024-10-08 18:28:35.621301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:46.954 [2024-10-08 18:28:35.621312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:46.954 [2024-10-08 18:28:35.621326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.954 [2024-10-08 18:28:35.621338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:46.954 [2024-10-08 18:28:35.621350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:46.954 [2024-10-08 18:28:35.621364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.954 [2024-10-08 18:28:35.621375] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:46.954 [2024-10-08 18:28:35.621389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:46.954 [2024-10-08 18:28:35.621402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.954 [2024-10-08 18:28:35.621425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.954 [2024-10-08 18:28:35.621439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:46.954 [2024-10-08 18:28:35.621452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:46.954 [2024-10-08 18:28:35.621463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:46.954 [2024-10-08 18:28:35.621475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region data_btm 00:17:46.954 [2024-10-08 18:28:35.621487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:46.954 [2024-10-08 18:28:35.621499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:46.954 [2024-10-08 18:28:35.621513] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:46.954 [2024-10-08 18:28:35.621533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.954 [2024-10-08 18:28:35.621549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:46.954 [2024-10-08 18:28:35.621562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:46.954 [2024-10-08 18:28:35.621575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:46.954 [2024-10-08 18:28:35.621586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:46.954 [2024-10-08 18:28:35.621600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:46.954 [2024-10-08 18:28:35.621613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:46.954 [2024-10-08 18:28:35.621627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:46.954 [2024-10-08 18:28:35.621639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:46.954 [2024-10-08 18:28:35.621652] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:46.954 [2024-10-08 18:28:35.621665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:46.954 [2024-10-08 18:28:35.621677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:46.954 [2024-10-08 18:28:35.621690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:46.954 [2024-10-08 18:28:35.621703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:46.954 [2024-10-08 18:28:35.621717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:46.954 [2024-10-08 18:28:35.621728] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:46.954 [2024-10-08 18:28:35.621743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.954 [2024-10-08 18:28:35.621771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:46.954 [2024-10-08 18:28:35.621784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:46.954 [2024-10-08 18:28:35.621797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:46.954 [2024-10-08 18:28:35.621811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:46.954 [2024-10-08 18:28:35.621824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.954 [2024-10-08 18:28:35.621841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:46.954 [2024-10-08 18:28:35.621855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:17:46.954 [2024-10-08 18:28:35.621866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.954 [2024-10-08 18:28:35.645129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.954 [2024-10-08 18:28:35.645188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.954 [2024-10-08 18:28:35.645221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.163 ms 00:17:46.954 [2024-10-08 18:28:35.645236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.954 [2024-10-08 18:28:35.645502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.954 [2024-10-08 18:28:35.645581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:46.954 [2024-10-08 18:28:35.645614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:17:46.954 [2024-10-08 18:28:35.645631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.954 [2024-10-08 18:28:35.655805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.954 [2024-10-08 18:28:35.655847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.954 [2024-10-08 18:28:35.655864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.128 ms 00:17:46.954 [2024-10-08 18:28:35.655876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.954 [2024-10-08 18:28:35.655999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.954 [2024-10-08 
18:28:35.656015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.954 [2024-10-08 18:28:35.656033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:46.954 [2024-10-08 18:28:35.656046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.954 [2024-10-08 18:28:35.656490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.954 [2024-10-08 18:28:35.656522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.954 [2024-10-08 18:28:35.656537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.412 ms 00:17:46.954 [2024-10-08 18:28:35.656550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.954 [2024-10-08 18:28:35.656775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.954 [2024-10-08 18:28:35.656802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.954 [2024-10-08 18:28:35.656817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:17:46.954 [2024-10-08 18:28:35.656836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.954 [2024-10-08 18:28:35.662975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.954 [2024-10-08 18:28:35.663008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.955 [2024-10-08 18:28:35.663022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.102 ms 00:17:46.955 [2024-10-08 18:28:35.663039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.665958] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:46.955 [2024-10-08 18:28:35.665997] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:46.955 
[2024-10-08 18:28:35.666015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.666027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:46.955 [2024-10-08 18:28:35.666040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.836 ms 00:17:46.955 [2024-10-08 18:28:35.666051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.680942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.680985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:46.955 [2024-10-08 18:28:35.681002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.802 ms 00:17:46.955 [2024-10-08 18:28:35.681014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.683226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.683260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:46.955 [2024-10-08 18:28:35.683273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.094 ms 00:17:46.955 [2024-10-08 18:28:35.683284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.684784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.684821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:46.955 [2024-10-08 18:28:35.684834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.452 ms 00:17:46.955 [2024-10-08 18:28:35.684845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.685238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.685265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:17:46.955 [2024-10-08 18:28:35.685282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:17:46.955 [2024-10-08 18:28:35.685295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.703549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.703610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:46.955 [2024-10-08 18:28:35.703631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.193 ms 00:17:46.955 [2024-10-08 18:28:35.703644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.711567] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:46.955 [2024-10-08 18:28:35.728849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.728906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:46.955 [2024-10-08 18:28:35.728926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.069 ms 00:17:46.955 [2024-10-08 18:28:35.728938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.729098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.729116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:46.955 [2024-10-08 18:28:35.729130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:46.955 [2024-10-08 18:28:35.729144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.729229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.729255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:46.955 
[2024-10-08 18:28:35.729269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:46.955 [2024-10-08 18:28:35.729282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.729321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.729336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:46.955 [2024-10-08 18:28:35.729349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:46.955 [2024-10-08 18:28:35.729362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.729409] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:46.955 [2024-10-08 18:28:35.729438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.729451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:46.955 [2024-10-08 18:28:35.729465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:46.955 [2024-10-08 18:28:35.729478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.733328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.733369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:46.955 [2024-10-08 18:28:35.733385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.814 ms 00:17:46.955 [2024-10-08 18:28:35.733397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.733543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.955 [2024-10-08 18:28:35.733564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:46.955 [2024-10-08 18:28:35.733579] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:46.955 [2024-10-08 18:28:35.733592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.955 [2024-10-08 18:28:35.734523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.955 [2024-10-08 18:28:35.735550] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.080 ms, result 0 00:17:46.955 [2024-10-08 18:28:35.736685] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:46.955 [2024-10-08 18:28:35.745949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:47.238  [2024-10-08T18:28:36.088Z] Copying: 4096/4096 [kB] (average 21 MBps)[2024-10-08 18:28:35.931193] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:47.238 [2024-10-08 18:28:35.932625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.238 [2024-10-08 18:28:35.932668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:47.238 [2024-10-08 18:28:35.932690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:47.238 [2024-10-08 18:28:35.932699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.238 [2024-10-08 18:28:35.932722] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:47.238 [2024-10-08 18:28:35.933294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.238 [2024-10-08 18:28:35.933323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:47.238 [2024-10-08 18:28:35.933333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:17:47.238 [2024-10-08 18:28:35.933342] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.238 [2024-10-08 18:28:35.935158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.238 [2024-10-08 18:28:35.935198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:47.238 [2024-10-08 18:28:35.935209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.795 ms 00:17:47.238 [2024-10-08 18:28:35.935217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.238 [2024-10-08 18:28:35.939405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.239 [2024-10-08 18:28:35.939433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:47.239 [2024-10-08 18:28:35.939443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.168 ms 00:17:47.239 [2024-10-08 18:28:35.939451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.239 [2024-10-08 18:28:35.946453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.239 [2024-10-08 18:28:35.946482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:47.239 [2024-10-08 18:28:35.946492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.975 ms 00:17:47.239 [2024-10-08 18:28:35.946500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.239 [2024-10-08 18:28:35.948716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.239 [2024-10-08 18:28:35.948763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:47.239 [2024-10-08 18:28:35.948774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.169 ms 00:17:47.239 [2024-10-08 18:28:35.948782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.239 [2024-10-08 18:28:35.952690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.239 [2024-10-08 
18:28:35.952724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:47.239 [2024-10-08 18:28:35.952740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.876 ms 00:17:47.239 [2024-10-08 18:28:35.952748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.239 [2024-10-08 18:28:35.952894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.239 [2024-10-08 18:28:35.952904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:47.239 [2024-10-08 18:28:35.952913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:47.239 [2024-10-08 18:28:35.952921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.239 [2024-10-08 18:28:35.954807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.239 [2024-10-08 18:28:35.954840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:47.239 [2024-10-08 18:28:35.954850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.866 ms 00:17:47.239 [2024-10-08 18:28:35.954858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.239 [2024-10-08 18:28:35.956719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.239 [2024-10-08 18:28:35.956748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:47.239 [2024-10-08 18:28:35.956771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.830 ms 00:17:47.239 [2024-10-08 18:28:35.956778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.239 [2024-10-08 18:28:35.957964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.239 [2024-10-08 18:28:35.957993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:47.239 [2024-10-08 18:28:35.958002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.140 ms 00:17:47.239 [2024-10-08 18:28:35.958009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.239 [2024-10-08 18:28:35.959377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.239 [2024-10-08 18:28:35.959407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:47.239 [2024-10-08 18:28:35.959416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:17:47.239 [2024-10-08 18:28:35.959423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.239 [2024-10-08 18:28:35.959451] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:47.239 [2024-10-08 18:28:35.959467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 
[2024-10-08 18:28:35.959538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 
00:17:47.239 [2024-10-08 18:28:35.959642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: 
free 00:17:47.239 [2024-10-08 18:28:35.959745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 
state: free 00:17:47.239 [2024-10-08 18:28:35.959870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:47.239 [2024-10-08 18:28:35.959939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.959947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.959954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.959962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.959969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 
0 state: free 00:17:47.240 [2024-10-08 18:28:35.959976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.959984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.959991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.959998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 
261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:47.240 [2024-10-08 18:28:35.960249] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:47.240 [2024-10-08 18:28:35.960258] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d5ed988-dd07-46b0-9a05-0af5ae35e0f7 00:17:47.240 [2024-10-08 18:28:35.960267] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:47.240 [2024-10-08 18:28:35.960276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:47.240 [2024-10-08 18:28:35.960283] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:47.240 [2024-10-08 18:28:35.960292] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:47.240 [2024-10-08 18:28:35.960300] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:47.240 [2024-10-08 18:28:35.960315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:47.240 [2024-10-08 
18:28:35.960323] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:47.240 [2024-10-08 18:28:35.960329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:47.240 [2024-10-08 18:28:35.960336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:47.240 [2024-10-08 18:28:35.960343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.240 [2024-10-08 18:28:35.960353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:47.240 [2024-10-08 18:28:35.960362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:17:47.240 [2024-10-08 18:28:35.960374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.962240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.240 [2024-10-08 18:28:35.962266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:47.240 [2024-10-08 18:28:35.962276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:17:47.240 [2024-10-08 18:28:35.962284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.962384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.240 [2024-10-08 18:28:35.962393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:47.240 [2024-10-08 18:28:35.962403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:47.240 [2024-10-08 18:28:35.962414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.968392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.968431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:47.240 [2024-10-08 18:28:35.968441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:47.240 [2024-10-08 18:28:35.968450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.968522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.968532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:47.240 [2024-10-08 18:28:35.968540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.240 [2024-10-08 18:28:35.968548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.968587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.968597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:47.240 [2024-10-08 18:28:35.968605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.240 [2024-10-08 18:28:35.968615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.968634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.968645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:47.240 [2024-10-08 18:28:35.968653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.240 [2024-10-08 18:28:35.968661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.980232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.980277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:47.240 [2024-10-08 18:28:35.980288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.240 [2024-10-08 18:28:35.980297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.989366] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.989422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:47.240 [2024-10-08 18:28:35.989434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.240 [2024-10-08 18:28:35.989442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.989481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.989490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:47.240 [2024-10-08 18:28:35.989499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.240 [2024-10-08 18:28:35.989506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.989537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.989546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:47.240 [2024-10-08 18:28:35.989557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.240 [2024-10-08 18:28:35.989564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.989632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.989642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:47.240 [2024-10-08 18:28:35.989650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.240 [2024-10-08 18:28:35.989658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.989687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.989696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:47.240 [2024-10-08 
18:28:35.989704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.240 [2024-10-08 18:28:35.989721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.989777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.240 [2024-10-08 18:28:35.989791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:47.240 [2024-10-08 18:28:35.989800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.240 [2024-10-08 18:28:35.989809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.240 [2024-10-08 18:28:35.989859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:47.241 [2024-10-08 18:28:35.989868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:47.241 [2024-10-08 18:28:35.989880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:47.241 [2024-10-08 18:28:35.989888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.241 [2024-10-08 18:28:35.990031] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.391 ms, result 0 00:17:47.502 00:17:47.502 00:17:47.502 18:28:36 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=87041 00:17:47.502 18:28:36 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 87041 00:17:47.502 18:28:36 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:47.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:47.502 18:28:36 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 87041 ']' 00:17:47.502 18:28:36 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.502 18:28:36 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:47.502 18:28:36 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.502 18:28:36 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:47.502 18:28:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:47.502 [2024-10-08 18:28:36.289121] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:47.502 [2024-10-08 18:28:36.289252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87041 ] 00:17:47.764 [2024-10-08 18:28:36.414174] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:47.764 [2024-10-08 18:28:36.434905] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.764 [2024-10-08 18:28:36.479151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.336 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:48.336 18:28:37 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:48.336 18:28:37 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:48.598 [2024-10-08 18:28:37.364647] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:48.598 [2024-10-08 18:28:37.364788] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:48.861 [2024-10-08 18:28:37.539431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.861 [2024-10-08 18:28:37.539510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:48.861 [2024-10-08 18:28:37.539528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:48.861 [2024-10-08 18:28:37.539537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.861 [2024-10-08 18:28:37.541992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.861 [2024-10-08 18:28:37.542034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.861 [2024-10-08 18:28:37.542046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.432 ms 00:17:48.861 [2024-10-08 18:28:37.542054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.861 [2024-10-08 18:28:37.542231] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:48.861 [2024-10-08 18:28:37.542514] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:48.861 [2024-10-08 18:28:37.542545] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.861 [2024-10-08 18:28:37.542554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.861 [2024-10-08 18:28:37.542565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:17:48.861 [2024-10-08 18:28:37.542576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.861 [2024-10-08 18:28:37.544143] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:48.861 [2024-10-08 18:28:37.547097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.861 [2024-10-08 18:28:37.547140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:48.861 [2024-10-08 18:28:37.547151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.961 ms 00:17:48.861 [2024-10-08 18:28:37.547162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.861 [2024-10-08 18:28:37.547230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.861 [2024-10-08 18:28:37.547244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:48.861 [2024-10-08 18:28:37.547253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:48.861 [2024-10-08 18:28:37.547262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.861 [2024-10-08 18:28:37.553972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.861 [2024-10-08 18:28:37.554021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.861 [2024-10-08 18:28:37.554034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.658 ms 00:17:48.861 [2024-10-08 18:28:37.554044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.861 [2024-10-08 18:28:37.554152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.861 [2024-10-08 
18:28:37.554164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.861 [2024-10-08 18:28:37.554173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:48.861 [2024-10-08 18:28:37.554182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.861 [2024-10-08 18:28:37.554223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.861 [2024-10-08 18:28:37.554234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:48.861 [2024-10-08 18:28:37.554242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:48.861 [2024-10-08 18:28:37.554255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.861 [2024-10-08 18:28:37.554279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:48.861 [2024-10-08 18:28:37.555986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.861 [2024-10-08 18:28:37.556014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.861 [2024-10-08 18:28:37.556025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.709 ms 00:17:48.861 [2024-10-08 18:28:37.556037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.861 [2024-10-08 18:28:37.556087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.861 [2024-10-08 18:28:37.556097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:48.861 [2024-10-08 18:28:37.556107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:48.861 [2024-10-08 18:28:37.556114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.861 [2024-10-08 18:28:37.556137] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:48.861 [2024-10-08 18:28:37.556157] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:48.861 [2024-10-08 18:28:37.556200] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:48.861 [2024-10-08 18:28:37.556218] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:48.861 [2024-10-08 18:28:37.556327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:48.861 [2024-10-08 18:28:37.556345] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:48.861 [2024-10-08 18:28:37.556358] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:48.861 [2024-10-08 18:28:37.556369] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:48.861 [2024-10-08 18:28:37.556384] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:48.861 [2024-10-08 18:28:37.556392] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:48.861 [2024-10-08 18:28:37.556401] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:48.861 [2024-10-08 18:28:37.556408] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:48.862 [2024-10-08 18:28:37.556421] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:48.862 [2024-10-08 18:28:37.556430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.862 [2024-10-08 18:28:37.556439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:48.862 [2024-10-08 18:28:37.556446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 
00:17:48.862 [2024-10-08 18:28:37.556455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.862 [2024-10-08 18:28:37.556542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.862 [2024-10-08 18:28:37.556559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:48.862 [2024-10-08 18:28:37.556566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:48.862 [2024-10-08 18:28:37.556576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.862 [2024-10-08 18:28:37.556679] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:48.862 [2024-10-08 18:28:37.556699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:48.862 [2024-10-08 18:28:37.556709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.862 [2024-10-08 18:28:37.556721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.862 [2024-10-08 18:28:37.556730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:48.862 [2024-10-08 18:28:37.556739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:48.862 [2024-10-08 18:28:37.556747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:48.862 [2024-10-08 18:28:37.556771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:48.862 [2024-10-08 18:28:37.556779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:48.862 [2024-10-08 18:28:37.556788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.862 [2024-10-08 18:28:37.556796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:48.862 [2024-10-08 18:28:37.556805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:48.862 [2024-10-08 18:28:37.556812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 
MiB 00:17:48.862 [2024-10-08 18:28:37.556828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:48.862 [2024-10-08 18:28:37.556836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:48.862 [2024-10-08 18:28:37.556845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.862 [2024-10-08 18:28:37.556852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:48.862 [2024-10-08 18:28:37.556861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:48.862 [2024-10-08 18:28:37.556869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.862 [2024-10-08 18:28:37.556881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:48.862 [2024-10-08 18:28:37.556888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:48.862 [2024-10-08 18:28:37.556898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.862 [2024-10-08 18:28:37.556905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:48.862 [2024-10-08 18:28:37.556914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:48.862 [2024-10-08 18:28:37.556923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.862 [2024-10-08 18:28:37.556933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:48.862 [2024-10-08 18:28:37.556940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:48.862 [2024-10-08 18:28:37.556950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.862 [2024-10-08 18:28:37.556957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:48.862 [2024-10-08 18:28:37.556967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:48.862 [2024-10-08 18:28:37.556974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:17:48.862 [2024-10-08 18:28:37.556983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:48.862 [2024-10-08 18:28:37.556990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:48.862 [2024-10-08 18:28:37.557001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.862 [2024-10-08 18:28:37.557009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:48.862 [2024-10-08 18:28:37.557019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:48.862 [2024-10-08 18:28:37.557026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.862 [2024-10-08 18:28:37.557035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:48.862 [2024-10-08 18:28:37.557041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:48.862 [2024-10-08 18:28:37.557049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.862 [2024-10-08 18:28:37.557055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:48.862 [2024-10-08 18:28:37.557063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:48.862 [2024-10-08 18:28:37.557069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.862 [2024-10-08 18:28:37.557077] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:48.862 [2024-10-08 18:28:37.557085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:48.862 [2024-10-08 18:28:37.557093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.862 [2024-10-08 18:28:37.557100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.862 [2024-10-08 18:28:37.557110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:48.862 [2024-10-08 18:28:37.557117] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:48.862 [2024-10-08 18:28:37.557125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:48.862 [2024-10-08 18:28:37.557132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:48.862 [2024-10-08 18:28:37.557141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:48.862 [2024-10-08 18:28:37.557148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:48.862 [2024-10-08 18:28:37.557158] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:48.862 [2024-10-08 18:28:37.557167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.862 [2024-10-08 18:28:37.557177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:48.862 [2024-10-08 18:28:37.557186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:48.862 [2024-10-08 18:28:37.557195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:48.862 [2024-10-08 18:28:37.557202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:48.862 [2024-10-08 18:28:37.557213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:48.862 [2024-10-08 18:28:37.557220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:48.862 [2024-10-08 18:28:37.557229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:48.862 [2024-10-08 18:28:37.557236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:48.862 [2024-10-08 18:28:37.557244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:48.862 [2024-10-08 18:28:37.557251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:48.862 [2024-10-08 18:28:37.557260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:48.862 [2024-10-08 18:28:37.557267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:48.862 [2024-10-08 18:28:37.557277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:48.862 [2024-10-08 18:28:37.557284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:48.862 [2024-10-08 18:28:37.557293] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:48.862 [2024-10-08 18:28:37.557302] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.862 [2024-10-08 18:28:37.557314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:48.862 [2024-10-08 18:28:37.557321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:48.862 [2024-10-08 18:28:37.557331] 
upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:48.862 [2024-10-08 18:28:37.557339] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:48.862 [2024-10-08 18:28:37.557348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.862 [2024-10-08 18:28:37.557356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:48.862 [2024-10-08 18:28:37.557364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:17:48.862 [2024-10-08 18:28:37.557372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.862 [2024-10-08 18:28:37.568948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.862 [2024-10-08 18:28:37.568986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.862 [2024-10-08 18:28:37.568999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.507 ms 00:17:48.862 [2024-10-08 18:28:37.569008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.862 [2024-10-08 18:28:37.569141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.862 [2024-10-08 18:28:37.569168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:48.862 [2024-10-08 18:28:37.569179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:48.862 [2024-10-08 18:28:37.569187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.862 [2024-10-08 18:28:37.579344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.862 [2024-10-08 18:28:37.579384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.862 [2024-10-08 18:28:37.579397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 10.131 ms 00:17:48.862 [2024-10-08 18:28:37.579407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.862 [2024-10-08 18:28:37.579481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.862 [2024-10-08 18:28:37.579491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.862 [2024-10-08 18:28:37.579501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:48.862 [2024-10-08 18:28:37.579509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.862 [2024-10-08 18:28:37.579935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.579961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.863 [2024-10-08 18:28:37.579971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:17:48.863 [2024-10-08 18:28:37.579983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.580125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.580143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.863 [2024-10-08 18:28:37.580156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:17:48.863 [2024-10-08 18:28:37.580164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.600615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.600688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.863 [2024-10-08 18:28:37.600712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.420 ms 00:17:48.863 [2024-10-08 18:28:37.600726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.603629] 
ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:48.863 [2024-10-08 18:28:37.603672] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:48.863 [2024-10-08 18:28:37.603688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.603697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:48.863 [2024-10-08 18:28:37.603709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.722 ms 00:17:48.863 [2024-10-08 18:28:37.603716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.618646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.618693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:48.863 [2024-10-08 18:28:37.618710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.862 ms 00:17:48.863 [2024-10-08 18:28:37.618718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.620829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.620879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:48.863 [2024-10-08 18:28:37.620891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.000 ms 00:17:48.863 [2024-10-08 18:28:37.620899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.622299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.622332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:48.863 [2024-10-08 18:28:37.622345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:17:48.863 
[2024-10-08 18:28:37.622354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.622703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.622727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:48.863 [2024-10-08 18:28:37.622738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:17:48.863 [2024-10-08 18:28:37.622762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.640902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.640966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:48.863 [2024-10-08 18:28:37.640985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.095 ms 00:17:48.863 [2024-10-08 18:28:37.640994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.648527] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:48.863 [2024-10-08 18:28:37.665431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.665498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:48.863 [2024-10-08 18:28:37.665512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.368 ms 00:17:48.863 [2024-10-08 18:28:37.665522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.665634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.665650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:48.863 [2024-10-08 18:28:37.665660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:48.863 [2024-10-08 18:28:37.665672] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.665731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.665742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:48.863 [2024-10-08 18:28:37.665771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:48.863 [2024-10-08 18:28:37.665780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.665806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.665822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:48.863 [2024-10-08 18:28:37.665830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:48.863 [2024-10-08 18:28:37.665839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.665875] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:48.863 [2024-10-08 18:28:37.665887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.665895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:48.863 [2024-10-08 18:28:37.665904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:48.863 [2024-10-08 18:28:37.665915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.669769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.669803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:48.863 [2024-10-08 18:28:37.669816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.829 ms 00:17:48.863 [2024-10-08 18:28:37.669824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:48.863 [2024-10-08 18:28:37.669931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.863 [2024-10-08 18:28:37.669942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:48.863 [2024-10-08 18:28:37.669953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:17:48.863 [2024-10-08 18:28:37.669961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.863 [2024-10-08 18:28:37.670876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:48.863 [2024-10-08 18:28:37.671892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 131.119 ms, result 0 00:17:48.863 [2024-10-08 18:28:37.673181] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:48.863 Some configs were skipped because the RPC state that can call them passed over. 
00:17:49.125 18:28:37 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:49.125 [2024-10-08 18:28:37.895971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.125 [2024-10-08 18:28:37.896047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:49.125 [2024-10-08 18:28:37.896068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.504 ms 00:17:49.125 [2024-10-08 18:28:37.896083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.125 [2024-10-08 18:28:37.896130] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.669 ms, result 0 00:17:49.125 true 00:17:49.125 18:28:37 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:49.386 [2024-10-08 18:28:38.087780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.386 [2024-10-08 18:28:38.087850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:49.386 [2024-10-08 18:28:38.087872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:17:49.386 [2024-10-08 18:28:38.087883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.386 [2024-10-08 18:28:38.087936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.319 ms, result 0 00:17:49.386 true 00:17:49.386 18:28:38 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 87041 00:17:49.386 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 87041 ']' 00:17:49.386 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 87041 00:17:49.386 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:49.386 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:17:49.386 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87041 00:17:49.386 killing process with pid 87041 00:17:49.386 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:49.386 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:49.386 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87041' 00:17:49.386 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 87041 00:17:49.386 18:28:38 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 87041 00:17:49.650 [2024-10-08 18:28:38.259257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.259319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:49.650 [2024-10-08 18:28:38.259339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:49.650 [2024-10-08 18:28:38.259355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.259387] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:49.650 [2024-10-08 18:28:38.260006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.260041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:49.650 [2024-10-08 18:28:38.260059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:17:49.650 [2024-10-08 18:28:38.260074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.260425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.260450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:49.650 [2024-10-08 18:28:38.260467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.312 ms 00:17:49.650 [2024-10-08 18:28:38.260481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.264711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.264763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:49.650 [2024-10-08 18:28:38.264780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.197 ms 00:17:49.650 [2024-10-08 18:28:38.264792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.271997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.272033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:49.650 [2024-10-08 18:28:38.272052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.106 ms 00:17:49.650 [2024-10-08 18:28:38.272064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.274614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.274655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:49.650 [2024-10-08 18:28:38.274671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:17:49.650 [2024-10-08 18:28:38.274681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.278660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.278701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:49.650 [2024-10-08 18:28:38.278717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.929 ms 00:17:49.650 [2024-10-08 18:28:38.278730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.278915] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.278941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:49.650 [2024-10-08 18:28:38.278957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:17:49.650 [2024-10-08 18:28:38.278970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.281106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.281142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:49.650 [2024-10-08 18:28:38.281164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.103 ms 00:17:49.650 [2024-10-08 18:28:38.281175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.282607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.282641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:49.650 [2024-10-08 18:28:38.282656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:17:49.650 [2024-10-08 18:28:38.282666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.283899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.283933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:49.650 [2024-10-08 18:28:38.283948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:17:49.650 [2024-10-08 18:28:38.283959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.285174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.650 [2024-10-08 18:28:38.285208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:49.650 
[2024-10-08 18:28:38.285223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.128 ms 00:17:49.650 [2024-10-08 18:28:38.285233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.650 [2024-10-08 18:28:38.285328] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:49.650 [2024-10-08 18:28:38.285352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
12: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:49.650 [2024-10-08 18:28:38.285992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286158] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286361] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286571] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 
18:28:38.286807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:49.651 [2024-10-08 18:28:38.286890] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:49.651 [2024-10-08 18:28:38.286906] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d5ed988-dd07-46b0-9a05-0af5ae35e0f7 00:17:49.651 [2024-10-08 18:28:38.286919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:49.651 [2024-10-08 18:28:38.286934] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:49.651 [2024-10-08 18:28:38.286949] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:49.651 [2024-10-08 18:28:38.286964] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:49.651 [2024-10-08 18:28:38.286976] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:49.651 [2024-10-08 18:28:38.286992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:49.651 [2024-10-08 18:28:38.287013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:49.651 [2024-10-08 18:28:38.287027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:49.651 [2024-10-08 18:28:38.287038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:49.651 [2024-10-08 18:28:38.287053] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.651 [2024-10-08 18:28:38.287066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:49.651 [2024-10-08 18:28:38.287088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:17:49.651 [2024-10-08 18:28:38.287100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.289096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.651 [2024-10-08 18:28:38.289134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:49.651 [2024-10-08 18:28:38.289152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.963 ms 00:17:49.651 [2024-10-08 18:28:38.289165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.289296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.651 [2024-10-08 18:28:38.289313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:49.651 [2024-10-08 18:28:38.289329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:49.651 [2024-10-08 18:28:38.289342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.296050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.296094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.651 [2024-10-08 18:28:38.296112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.296123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.296262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.296286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:17:49.651 [2024-10-08 18:28:38.296306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.296320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.296389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.296407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.651 [2024-10-08 18:28:38.296424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.296436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.296468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.296488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.651 [2024-10-08 18:28:38.296504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.296516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.308574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.308627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.651 [2024-10-08 18:28:38.308647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.308658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.317959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.318015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.651 [2024-10-08 18:28:38.318037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.318058] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.318147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.318168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.651 [2024-10-08 18:28:38.318187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.318198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.318246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.318260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.651 [2024-10-08 18:28:38.318276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.318289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.318389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.318409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.651 [2024-10-08 18:28:38.318424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.318441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.318490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.318506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:49.651 [2024-10-08 18:28:38.318524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.318536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.318598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.318613] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.651 [2024-10-08 18:28:38.318629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.318648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.318720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.651 [2024-10-08 18:28:38.318744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.651 [2024-10-08 18:28:38.318847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.651 [2024-10-08 18:28:38.318861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.651 [2024-10-08 18:28:38.319092] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.797 ms, result 0 00:17:49.913 18:28:38 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:49.913 [2024-10-08 18:28:38.626526] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:49.913 [2024-10-08 18:28:38.626640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87077 ] 00:17:49.913 [2024-10-08 18:28:38.755457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:50.174 [2024-10-08 18:28:38.777696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.174 [2024-10-08 18:28:38.822722] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.174 [2024-10-08 18:28:38.926614] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:50.174 [2024-10-08 18:28:38.926709] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:50.436 [2024-10-08 18:28:39.082009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.082079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:50.436 [2024-10-08 18:28:39.082098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:50.436 [2024-10-08 18:28:39.082110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.084582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.084627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.436 [2024-10-08 18:28:39.084645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.449 ms 00:17:50.436 [2024-10-08 18:28:39.084661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.084888] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:50.436 [2024-10-08 18:28:39.085208] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:50.436 [2024-10-08 18:28:39.085237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.085250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.436 [2024-10-08 18:28:39.085273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 
00:17:50.436 [2024-10-08 18:28:39.085286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.086817] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:50.436 [2024-10-08 18:28:39.089532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.089573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:50.436 [2024-10-08 18:28:39.089587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.718 ms 00:17:50.436 [2024-10-08 18:28:39.089603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.089683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.089699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:50.436 [2024-10-08 18:28:39.089714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:50.436 [2024-10-08 18:28:39.089727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.096295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.096338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.436 [2024-10-08 18:28:39.096352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.485 ms 00:17:50.436 [2024-10-08 18:28:39.096367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.096538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.096569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.436 [2024-10-08 18:28:39.096583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:17:50.436 [2024-10-08 18:28:39.096595] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.096640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.096657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.436 [2024-10-08 18:28:39.096671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:50.436 [2024-10-08 18:28:39.096683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.096717] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:50.436 [2024-10-08 18:28:39.098466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.098499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.436 [2024-10-08 18:28:39.098519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.759 ms 00:17:50.436 [2024-10-08 18:28:39.098535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.098589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.098613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.436 [2024-10-08 18:28:39.098627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:50.436 [2024-10-08 18:28:39.098644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.098677] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:50.436 [2024-10-08 18:28:39.098706] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:50.436 [2024-10-08 18:28:39.098771] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 
00:17:50.436 [2024-10-08 18:28:39.098807] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:50.436 [2024-10-08 18:28:39.098954] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:50.436 [2024-10-08 18:28:39.098985] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.436 [2024-10-08 18:28:39.099001] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:50.436 [2024-10-08 18:28:39.099017] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.436 [2024-10-08 18:28:39.099031] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.436 [2024-10-08 18:28:39.099045] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:50.436 [2024-10-08 18:28:39.099058] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.436 [2024-10-08 18:28:39.099071] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:50.436 [2024-10-08 18:28:39.099083] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:50.436 [2024-10-08 18:28:39.099099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.099114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.436 [2024-10-08 18:28:39.099130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:17:50.436 [2024-10-08 18:28:39.099144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.099272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.436 [2024-10-08 18:28:39.099295] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.436 [2024-10-08 18:28:39.099311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:50.436 [2024-10-08 18:28:39.099326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.436 [2024-10-08 18:28:39.099481] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.436 [2024-10-08 18:28:39.099509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.436 [2024-10-08 18:28:39.099526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.437 [2024-10-08 18:28:39.099545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.437 [2024-10-08 18:28:39.099559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:50.437 [2024-10-08 18:28:39.099571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.437 [2024-10-08 18:28:39.099591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:50.437 [2024-10-08 18:28:39.099603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.437 [2024-10-08 18:28:39.099618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:50.437 [2024-10-08 18:28:39.099629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.437 [2024-10-08 18:28:39.099641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.437 [2024-10-08 18:28:39.099652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:50.437 [2024-10-08 18:28:39.099663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.437 [2024-10-08 18:28:39.099674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.437 [2024-10-08 18:28:39.099686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:50.437 [2024-10-08 18:28:39.099697] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.437 [2024-10-08 18:28:39.099708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.437 [2024-10-08 18:28:39.099719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:50.437 [2024-10-08 18:28:39.099730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.437 [2024-10-08 18:28:39.099741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.437 [2024-10-08 18:28:39.099766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:50.437 [2024-10-08 18:28:39.099778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.437 [2024-10-08 18:28:39.099790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.437 [2024-10-08 18:28:39.099801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:50.437 [2024-10-08 18:28:39.099817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.437 [2024-10-08 18:28:39.099829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.437 [2024-10-08 18:28:39.099841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:50.437 [2024-10-08 18:28:39.099852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.437 [2024-10-08 18:28:39.099862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.437 [2024-10-08 18:28:39.099873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:50.437 [2024-10-08 18:28:39.099885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.437 [2024-10-08 18:28:39.099895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.437 [2024-10-08 18:28:39.099907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:50.437 [2024-10-08 
18:28:39.099918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.437 [2024-10-08 18:28:39.099929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:50.437 [2024-10-08 18:28:39.099940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:50.437 [2024-10-08 18:28:39.099952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.437 [2024-10-08 18:28:39.099964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:50.437 [2024-10-08 18:28:39.099976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:50.437 [2024-10-08 18:28:39.099987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.437 [2024-10-08 18:28:39.100002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:50.437 [2024-10-08 18:28:39.100014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:50.437 [2024-10-08 18:28:39.100024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.437 [2024-10-08 18:28:39.100036] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.437 [2024-10-08 18:28:39.100053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.437 [2024-10-08 18:28:39.100072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.437 [2024-10-08 18:28:39.100087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.437 [2024-10-08 18:28:39.100113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:50.437 [2024-10-08 18:28:39.100129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:50.437 [2024-10-08 18:28:39.100147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.437 [2024-10-08 18:28:39.100164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region data_btm 00:17:50.437 [2024-10-08 18:28:39.100183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.437 [2024-10-08 18:28:39.100204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.437 [2024-10-08 18:28:39.100222] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.437 [2024-10-08 18:28:39.100244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.437 [2024-10-08 18:28:39.100262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:50.437 [2024-10-08 18:28:39.100282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:50.437 [2024-10-08 18:28:39.100294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:50.437 [2024-10-08 18:28:39.100306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:50.437 [2024-10-08 18:28:39.100319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:50.437 [2024-10-08 18:28:39.100335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:50.437 [2024-10-08 18:28:39.100348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:50.437 [2024-10-08 18:28:39.100367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:50.437 [2024-10-08 18:28:39.100382] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:50.437 [2024-10-08 18:28:39.100395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:50.437 [2024-10-08 18:28:39.100408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:50.437 [2024-10-08 18:28:39.100421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:50.437 [2024-10-08 18:28:39.100433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:50.437 [2024-10-08 18:28:39.100446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:50.437 [2024-10-08 18:28:39.100458] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.437 [2024-10-08 18:28:39.100472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.437 [2024-10-08 18:28:39.100489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.437 [2024-10-08 18:28:39.100505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.437 [2024-10-08 18:28:39.100517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.437 [2024-10-08 18:28:39.100530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.437 [2024-10-08 18:28:39.100542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.437 [2024-10-08 18:28:39.100558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.437 [2024-10-08 18:28:39.100571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:17:50.437 [2024-10-08 18:28:39.100586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.437 [2024-10-08 18:28:39.123290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.437 [2024-10-08 18:28:39.123368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.437 [2024-10-08 18:28:39.123397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.600 ms 00:17:50.437 [2024-10-08 18:28:39.123415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.437 [2024-10-08 18:28:39.123739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.437 [2024-10-08 18:28:39.123804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:50.437 [2024-10-08 18:28:39.123845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:17:50.437 [2024-10-08 18:28:39.123867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.437 [2024-10-08 18:28:39.134409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.437 [2024-10-08 18:28:39.134462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.437 [2024-10-08 18:28:39.134478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.490 ms 00:17:50.437 [2024-10-08 18:28:39.134490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.437 [2024-10-08 18:28:39.134619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.437 [2024-10-08 
18:28:39.134639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:50.437 [2024-10-08 18:28:39.134654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:50.437 [2024-10-08 18:28:39.134671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.437 [2024-10-08 18:28:39.135126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.437 [2024-10-08 18:28:39.135160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.437 [2024-10-08 18:28:39.135175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:17:50.437 [2024-10-08 18:28:39.135196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.437 [2024-10-08 18:28:39.135387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.437 [2024-10-08 18:28:39.135411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.437 [2024-10-08 18:28:39.135430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:17:50.437 [2024-10-08 18:28:39.135446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.437 [2024-10-08 18:28:39.141656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.437 [2024-10-08 18:28:39.141693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.437 [2024-10-08 18:28:39.141714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.178 ms 00:17:50.437 [2024-10-08 18:28:39.141728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.437 [2024-10-08 18:28:39.144603] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:50.437 [2024-10-08 18:28:39.144648] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:50.437 
[2024-10-08 18:28:39.144665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.144678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:50.438 [2024-10-08 18:28:39.144691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.778 ms 00:17:50.438 [2024-10-08 18:28:39.144703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.159461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.159514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:50.438 [2024-10-08 18:28:39.159533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.595 ms 00:17:50.438 [2024-10-08 18:28:39.159545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.162253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.162296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:50.438 [2024-10-08 18:28:39.162312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:17:50.438 [2024-10-08 18:28:39.162323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.163975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.164020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:50.438 [2024-10-08 18:28:39.164034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms 00:17:50.438 [2024-10-08 18:28:39.164044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.164450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.164482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:17:50.438 [2024-10-08 18:28:39.164496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:50.438 [2024-10-08 18:28:39.164519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.183064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.183143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:50.438 [2024-10-08 18:28:39.183167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.504 ms 00:17:50.438 [2024-10-08 18:28:39.183179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.191254] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:50.438 [2024-10-08 18:28:39.209094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.209161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:50.438 [2024-10-08 18:28:39.209181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.777 ms 00:17:50.438 [2024-10-08 18:28:39.209193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.209360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.209378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:50.438 [2024-10-08 18:28:39.209393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:50.438 [2024-10-08 18:28:39.209419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.209502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.209538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:50.438 
[2024-10-08 18:28:39.209553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:50.438 [2024-10-08 18:28:39.209565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.209604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.209626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:50.438 [2024-10-08 18:28:39.209640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:50.438 [2024-10-08 18:28:39.209657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.209718] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:50.438 [2024-10-08 18:28:39.209747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.209782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:50.438 [2024-10-08 18:28:39.209796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:50.438 [2024-10-08 18:28:39.209809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.213628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.213675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:50.438 [2024-10-08 18:28:39.213691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.783 ms 00:17:50.438 [2024-10-08 18:28:39.213703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.213915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.438 [2024-10-08 18:28:39.213947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:50.438 [2024-10-08 18:28:39.213962] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:50.438 [2024-10-08 18:28:39.213974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.438 [2024-10-08 18:28:39.214922] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:50.438 [2024-10-08 18:28:39.215983] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 132.579 ms, result 0 00:17:50.438 [2024-10-08 18:28:39.216734] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:50.438 [2024-10-08 18:28:39.226253] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.822  [2024-10-08T18:28:41.615Z] Copying: 25/256 [MB] (25 MBps) [2024-10-08T18:28:42.561Z] Copying: 54/256 [MB] (28 MBps) [2024-10-08T18:28:43.496Z] Copying: 79/256 [MB] (25 MBps) [2024-10-08T18:28:44.429Z] Copying: 117/256 [MB] (37 MBps) [2024-10-08T18:28:45.364Z] Copying: 159/256 [MB] (41 MBps) [2024-10-08T18:28:46.299Z] Copying: 200/256 [MB] (41 MBps) [2024-10-08T18:28:46.871Z] Copying: 242/256 [MB] (41 MBps) [2024-10-08T18:28:46.871Z] Copying: 256/256 [MB] (average 34 MBps)[2024-10-08 18:28:46.820512] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:58.021 [2024-10-08 18:28:46.822466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.021 [2024-10-08 18:28:46.822536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:58.021 [2024-10-08 18:28:46.822560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:58.021 [2024-10-08 18:28:46.822575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.021 [2024-10-08 18:28:46.822616] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on 
ftl_core_thread 00:17:58.021 [2024-10-08 18:28:46.823333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.021 [2024-10-08 18:28:46.823380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:58.021 [2024-10-08 18:28:46.823399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:17:58.021 [2024-10-08 18:28:46.823416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.021 [2024-10-08 18:28:46.823949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.021 [2024-10-08 18:28:46.823985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:58.021 [2024-10-08 18:28:46.824002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:17:58.021 [2024-10-08 18:28:46.824030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.021 [2024-10-08 18:28:46.829046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.021 [2024-10-08 18:28:46.829078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:58.021 [2024-10-08 18:28:46.829087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.983 ms 00:17:58.021 [2024-10-08 18:28:46.829094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.021 [2024-10-08 18:28:46.834909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.021 [2024-10-08 18:28:46.834955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:58.021 [2024-10-08 18:28:46.834965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.538 ms 00:17:58.021 [2024-10-08 18:28:46.834972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.022 [2024-10-08 18:28:46.836815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.022 [2024-10-08 18:28:46.836853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist NV cache metadata 00:17:58.022 [2024-10-08 18:28:46.836861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.767 ms 00:17:58.022 [2024-10-08 18:28:46.836868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.022 [2024-10-08 18:28:46.840691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.022 [2024-10-08 18:28:46.840740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:58.022 [2024-10-08 18:28:46.840758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.803 ms 00:17:58.022 [2024-10-08 18:28:46.840765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.022 [2024-10-08 18:28:46.840878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.022 [2024-10-08 18:28:46.840891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:58.022 [2024-10-08 18:28:46.840899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:58.022 [2024-10-08 18:28:46.840905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.022 [2024-10-08 18:28:46.842831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.022 [2024-10-08 18:28:46.842864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:58.022 [2024-10-08 18:28:46.842872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.908 ms 00:17:58.022 [2024-10-08 18:28:46.842879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.022 [2024-10-08 18:28:46.844438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.022 [2024-10-08 18:28:46.844480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:58.022 [2024-10-08 18:28:46.844487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.539 ms 00:17:58.022 
[2024-10-08 18:28:46.844494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.022 [2024-10-08 18:28:46.845938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.022 [2024-10-08 18:28:46.845983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:58.022 [2024-10-08 18:28:46.845993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:17:58.022 [2024-10-08 18:28:46.846000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.022 [2024-10-08 18:28:46.847093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.022 [2024-10-08 18:28:46.847122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:58.022 [2024-10-08 18:28:46.847130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:17:58.022 [2024-10-08 18:28:46.847136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.022 [2024-10-08 18:28:46.847153] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:58.022 [2024-10-08 18:28:46.847176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 
18:28:46.847221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 
18:28:46.847312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 
[2024-10-08 18:28:46.847399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:17:58.022 [2024-10-08 18:28:46.847484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: 
free 00:17:58.022 [2024-10-08 18:28:46.847569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:58.022 [2024-10-08 18:28:46.847613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:17:58.023 [2024-10-08 18:28:46.847654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 
0 state: free 00:17:58.023 [2024-10-08 18:28:46.847740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:58.023 [2024-10-08 18:28:46.847823] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:58.023 [2024-10-08 18:28:46.847830] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6d5ed988-dd07-46b0-9a05-0af5ae35e0f7 00:17:58.023 [2024-10-08 18:28:46.847837] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:58.023 [2024-10-08 18:28:46.847844] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:58.023 [2024-10-08 18:28:46.847850] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:58.023 [2024-10-08 18:28:46.847863] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:58.023 [2024-10-08 18:28:46.847869] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:58.023 [2024-10-08 18:28:46.847882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:58.023 [2024-10-08 18:28:46.847889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:58.023 [2024-10-08 18:28:46.847894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:58.023 [2024-10-08 18:28:46.847900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:58.023 [2024-10-08 18:28:46.847906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.023 [2024-10-08 18:28:46.847915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:58.023 [2024-10-08 18:28:46.847923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:17:58.023 [2024-10-08 18:28:46.847929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.023 [2024-10-08 18:28:46.849719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.023 [2024-10-08 18:28:46.849743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:58.023 [2024-10-08 18:28:46.849762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:17:58.023 [2024-10-08 18:28:46.849775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.023 [2024-10-08 18:28:46.849871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.023 [2024-10-08 18:28:46.849878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:58.023 [2024-10-08 18:28:46.849887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:58.023 [2024-10-08 
18:28:46.849893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.023 [2024-10-08 18:28:46.855953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.023 [2024-10-08 18:28:46.855998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.023 [2024-10-08 18:28:46.856008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.023 [2024-10-08 18:28:46.856015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.023 [2024-10-08 18:28:46.856107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.023 [2024-10-08 18:28:46.856115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.023 [2024-10-08 18:28:46.856121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.023 [2024-10-08 18:28:46.856127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.023 [2024-10-08 18:28:46.856166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.023 [2024-10-08 18:28:46.856173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.023 [2024-10-08 18:28:46.856180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.023 [2024-10-08 18:28:46.856186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.023 [2024-10-08 18:28:46.856201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.023 [2024-10-08 18:28:46.856210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.023 [2024-10-08 18:28:46.856217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.023 [2024-10-08 18:28:46.856223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.023 [2024-10-08 18:28:46.867357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:58.023 [2024-10-08 18:28:46.867419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.023 [2024-10-08 18:28:46.867430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.023 [2024-10-08 18:28:46.867437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.282 [2024-10-08 18:28:46.876002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.282 [2024-10-08 18:28:46.876067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.282 [2024-10-08 18:28:46.876077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.282 [2024-10-08 18:28:46.876084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.282 [2024-10-08 18:28:46.876124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.282 [2024-10-08 18:28:46.876131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:58.282 [2024-10-08 18:28:46.876147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.282 [2024-10-08 18:28:46.876153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.282 [2024-10-08 18:28:46.876179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.282 [2024-10-08 18:28:46.876186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:58.282 [2024-10-08 18:28:46.876195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.282 [2024-10-08 18:28:46.876202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.282 [2024-10-08 18:28:46.876262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.282 [2024-10-08 18:28:46.876270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:58.282 [2024-10-08 18:28:46.876277] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.282 [2024-10-08 18:28:46.876283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.282 [2024-10-08 18:28:46.876307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.282 [2024-10-08 18:28:46.876315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:58.282 [2024-10-08 18:28:46.876324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.282 [2024-10-08 18:28:46.876330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.282 [2024-10-08 18:28:46.876368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.282 [2024-10-08 18:28:46.876376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:58.282 [2024-10-08 18:28:46.876383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.282 [2024-10-08 18:28:46.876389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.282 [2024-10-08 18:28:46.876435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:58.282 [2024-10-08 18:28:46.876444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:58.282 [2024-10-08 18:28:46.876453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:58.282 [2024-10-08 18:28:46.876459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.282 [2024-10-08 18:28:46.876588] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.127 ms, result 0 00:17:58.282 00:17:58.282 00:17:58.282 18:28:47 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:58.851 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:58.851 18:28:47 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:58.851 
18:28:47 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:17:58.851 18:28:47 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:58.851 18:28:47 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:58.851 18:28:47 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:58.851 18:28:47 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:59.113 18:28:47 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 87041 00:17:59.113 18:28:47 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 87041 ']' 00:17:59.113 18:28:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 87041 00:17:59.113 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (87041) - No such process 00:17:59.113 Process with pid 87041 is not found 00:17:59.113 18:28:47 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 87041 is not found' 00:17:59.113 00:17:59.113 real 1m1.878s 00:17:59.113 user 1m25.973s 00:17:59.113 sys 0m6.200s 00:17:59.113 18:28:47 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:59.113 ************************************ 00:17:59.113 END TEST ftl_trim 00:17:59.113 18:28:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:59.113 ************************************ 00:17:59.113 18:28:47 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:59.113 18:28:47 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:17:59.113 18:28:47 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:59.113 18:28:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:59.113 ************************************ 00:17:59.113 START TEST ftl_restore 00:17:59.113 ************************************ 00:17:59.113 18:28:47 ftl.ftl_restore -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:17:59.113 * Looking for test storage... 00:17:59.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.113 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:59.113 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:17:59.113 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:59.113 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.113 18:28:47 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:17:59.113 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.113 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:59.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.113 --rc genhtml_branch_coverage=1 00:17:59.113 --rc genhtml_function_coverage=1 00:17:59.113 --rc genhtml_legend=1 00:17:59.113 --rc geninfo_all_blocks=1 00:17:59.113 --rc geninfo_unexecuted_blocks=1 00:17:59.113 00:17:59.113 ' 00:17:59.113 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:59.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.113 --rc genhtml_branch_coverage=1 00:17:59.113 --rc genhtml_function_coverage=1 00:17:59.113 --rc genhtml_legend=1 00:17:59.113 --rc geninfo_all_blocks=1 00:17:59.114 --rc geninfo_unexecuted_blocks=1 
00:17:59.114 00:17:59.114 ' 00:17:59.114 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:59.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.114 --rc genhtml_branch_coverage=1 00:17:59.114 --rc genhtml_function_coverage=1 00:17:59.114 --rc genhtml_legend=1 00:17:59.114 --rc geninfo_all_blocks=1 00:17:59.114 --rc geninfo_unexecuted_blocks=1 00:17:59.114 00:17:59.114 ' 00:17:59.114 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:59.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.114 --rc genhtml_branch_coverage=1 00:17:59.114 --rc genhtml_function_coverage=1 00:17:59.114 --rc genhtml_legend=1 00:17:59.114 --rc geninfo_all_blocks=1 00:17:59.114 --rc geninfo_unexecuted_blocks=1 00:17:59.114 00:17:59.114 ' 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
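The xtrace lines above step through the `cmp_versions` helper from `scripts/common.sh`: each version string is split on `.`, `-`, and `:` into an array, then compared field by field until one side wins. A minimal standalone sketch of that comparison follows; it is modeled on what the trace shows, not copied from the script, and the zero-padding of missing fields and the limited operator set (`<`, `>`, `==`) are simplifying assumptions:

```shell
#!/usr/bin/env bash
# Field-by-field version comparison, sketched from the cmp_versions
# xtrace above. Splits on . - : and compares numeric fields in order.
cmp_versions() {
    local IFS=.-:          # split versions on dot, dash, colon (as in the trace)
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v d1 d2
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0}   # missing fields compare as 0 (assumption)
        d2=${ver2[v]:-0}
        if (( d1 > d2 )); then [[ $op == ">" ]]; return; fi
        if (( d1 < d2 )); then [[ $op == "<" ]]; return; fi
    done
    [[ $op == "==" ]]      # all fields equal
}

cmp_versions 1.15 "<" 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```

This mirrors the trace's flow for `lt 1.15 2`: `ver1=(1 15)`, `ver2=(2)`, and the first field already decides the comparison, so the loop returns on `v=0`.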
00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.kOLBOCajO8 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=87236 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 87236 00:17:59.114 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 87236 ']' 00:17:59.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:59.114 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.114 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:59.114 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.114 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:59.114 18:28:47 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:17:59.114 18:28:47 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.409 [2024-10-08 18:28:48.013164] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:17:59.409 [2024-10-08 18:28:48.013296] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87236 ] 00:17:59.409 [2024-10-08 18:28:48.142851] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:59.409 [2024-10-08 18:28:48.165676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.409 [2024-10-08 18:28:48.210220] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.366 18:28:48 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:00.366 18:28:48 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:00.366 18:28:48 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:00.366 18:28:48 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:00.366 18:28:48 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:00.366 18:28:48 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:00.366 18:28:48 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:00.366 18:28:48 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:00.366 18:28:49 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:00.366 18:28:49 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:00.366 18:28:49 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:00.366 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:00.366 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:00.366 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:00.366 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:00.366 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:00.706 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:00.706 { 00:18:00.706 "name": "nvme0n1", 00:18:00.706 "aliases": [ 00:18:00.706 "3e241dd6-6a3f-4e80-be3e-8d6c279d523d" 00:18:00.706 ], 00:18:00.706 
"product_name": "NVMe disk", 00:18:00.706 "block_size": 4096, 00:18:00.706 "num_blocks": 1310720, 00:18:00.706 "uuid": "3e241dd6-6a3f-4e80-be3e-8d6c279d523d", 00:18:00.706 "numa_id": -1, 00:18:00.706 "assigned_rate_limits": { 00:18:00.706 "rw_ios_per_sec": 0, 00:18:00.706 "rw_mbytes_per_sec": 0, 00:18:00.706 "r_mbytes_per_sec": 0, 00:18:00.706 "w_mbytes_per_sec": 0 00:18:00.706 }, 00:18:00.706 "claimed": true, 00:18:00.706 "claim_type": "read_many_write_one", 00:18:00.706 "zoned": false, 00:18:00.706 "supported_io_types": { 00:18:00.706 "read": true, 00:18:00.706 "write": true, 00:18:00.706 "unmap": true, 00:18:00.706 "flush": true, 00:18:00.706 "reset": true, 00:18:00.706 "nvme_admin": true, 00:18:00.706 "nvme_io": true, 00:18:00.706 "nvme_io_md": false, 00:18:00.706 "write_zeroes": true, 00:18:00.706 "zcopy": false, 00:18:00.706 "get_zone_info": false, 00:18:00.706 "zone_management": false, 00:18:00.706 "zone_append": false, 00:18:00.706 "compare": true, 00:18:00.706 "compare_and_write": false, 00:18:00.706 "abort": true, 00:18:00.706 "seek_hole": false, 00:18:00.706 "seek_data": false, 00:18:00.706 "copy": true, 00:18:00.706 "nvme_iov_md": false 00:18:00.706 }, 00:18:00.706 "driver_specific": { 00:18:00.706 "nvme": [ 00:18:00.706 { 00:18:00.706 "pci_address": "0000:00:11.0", 00:18:00.706 "trid": { 00:18:00.706 "trtype": "PCIe", 00:18:00.706 "traddr": "0000:00:11.0" 00:18:00.706 }, 00:18:00.706 "ctrlr_data": { 00:18:00.706 "cntlid": 0, 00:18:00.706 "vendor_id": "0x1b36", 00:18:00.706 "model_number": "QEMU NVMe Ctrl", 00:18:00.706 "serial_number": "12341", 00:18:00.706 "firmware_revision": "8.0.0", 00:18:00.706 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:00.706 "oacs": { 00:18:00.706 "security": 0, 00:18:00.706 "format": 1, 00:18:00.706 "firmware": 0, 00:18:00.706 "ns_manage": 1 00:18:00.706 }, 00:18:00.706 "multi_ctrlr": false, 00:18:00.706 "ana_reporting": false 00:18:00.706 }, 00:18:00.706 "vs": { 00:18:00.706 "nvme_version": "1.4" 00:18:00.706 }, 
00:18:00.706 "ns_data": { 00:18:00.706 "id": 1, 00:18:00.706 "can_share": false 00:18:00.706 } 00:18:00.706 } 00:18:00.706 ], 00:18:00.706 "mp_policy": "active_passive" 00:18:00.706 } 00:18:00.706 } 00:18:00.706 ]' 00:18:00.706 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:00.706 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:00.706 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:00.706 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:00.706 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:00.706 18:28:49 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:00.706 18:28:49 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:00.706 18:28:49 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:00.706 18:28:49 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:00.706 18:28:49 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:00.706 18:28:49 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:00.965 18:28:49 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=ff1af36e-dd8f-4474-8ee3-b53e96158702 00:18:00.965 18:28:49 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:00.965 18:28:49 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ff1af36e-dd8f-4474-8ee3-b53e96158702 00:18:01.223 18:28:49 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:01.223 18:28:50 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=faf4d8ce-be28-4090-9bec-9e09ba74de44 00:18:01.223 18:28:50 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 
faf4d8ce-be28-4090-9bec-9e09ba74de44 00:18:01.481 18:28:50 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:01.481 18:28:50 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:01.481 18:28:50 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:01.481 18:28:50 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:01.481 18:28:50 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:01.481 18:28:50 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:01.481 18:28:50 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:01.481 18:28:50 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:01.481 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:01.481 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:01.481 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:01.481 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:01.481 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:01.740 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:01.740 { 00:18:01.740 "name": "bc618f91-3b5e-4cfd-ac49-f714891d63e3", 00:18:01.740 "aliases": [ 00:18:01.740 "lvs/nvme0n1p0" 00:18:01.740 ], 00:18:01.740 "product_name": "Logical Volume", 00:18:01.740 "block_size": 4096, 00:18:01.740 "num_blocks": 26476544, 00:18:01.740 "uuid": "bc618f91-3b5e-4cfd-ac49-f714891d63e3", 00:18:01.740 "assigned_rate_limits": { 00:18:01.740 "rw_ios_per_sec": 0, 00:18:01.740 "rw_mbytes_per_sec": 0, 00:18:01.740 "r_mbytes_per_sec": 0, 
00:18:01.740 "w_mbytes_per_sec": 0 00:18:01.740 }, 00:18:01.740 "claimed": false, 00:18:01.740 "zoned": false, 00:18:01.740 "supported_io_types": { 00:18:01.740 "read": true, 00:18:01.740 "write": true, 00:18:01.740 "unmap": true, 00:18:01.740 "flush": false, 00:18:01.740 "reset": true, 00:18:01.740 "nvme_admin": false, 00:18:01.740 "nvme_io": false, 00:18:01.740 "nvme_io_md": false, 00:18:01.740 "write_zeroes": true, 00:18:01.740 "zcopy": false, 00:18:01.740 "get_zone_info": false, 00:18:01.740 "zone_management": false, 00:18:01.740 "zone_append": false, 00:18:01.740 "compare": false, 00:18:01.740 "compare_and_write": false, 00:18:01.740 "abort": false, 00:18:01.740 "seek_hole": true, 00:18:01.740 "seek_data": true, 00:18:01.740 "copy": false, 00:18:01.740 "nvme_iov_md": false 00:18:01.740 }, 00:18:01.740 "driver_specific": { 00:18:01.740 "lvol": { 00:18:01.740 "lvol_store_uuid": "faf4d8ce-be28-4090-9bec-9e09ba74de44", 00:18:01.740 "base_bdev": "nvme0n1", 00:18:01.740 "thin_provision": true, 00:18:01.740 "num_allocated_clusters": 0, 00:18:01.740 "snapshot": false, 00:18:01.740 "clone": false, 00:18:01.740 "esnap_clone": false 00:18:01.740 } 00:18:01.740 } 00:18:01.740 } 00:18:01.740 ]' 00:18:01.740 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:01.740 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:01.740 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:01.740 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:01.740 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:01.740 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:01.740 18:28:50 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:01.740 18:28:50 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:01.740 18:28:50 ftl.ftl_restore -- ftl/common.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:02.001 18:28:50 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:02.001 18:28:50 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:02.001 18:28:50 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:02.001 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:02.001 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:02.001 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:02.001 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:02.001 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:02.262 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:02.262 { 00:18:02.262 "name": "bc618f91-3b5e-4cfd-ac49-f714891d63e3", 00:18:02.262 "aliases": [ 00:18:02.262 "lvs/nvme0n1p0" 00:18:02.262 ], 00:18:02.262 "product_name": "Logical Volume", 00:18:02.262 "block_size": 4096, 00:18:02.262 "num_blocks": 26476544, 00:18:02.262 "uuid": "bc618f91-3b5e-4cfd-ac49-f714891d63e3", 00:18:02.262 "assigned_rate_limits": { 00:18:02.262 "rw_ios_per_sec": 0, 00:18:02.262 "rw_mbytes_per_sec": 0, 00:18:02.262 "r_mbytes_per_sec": 0, 00:18:02.262 "w_mbytes_per_sec": 0 00:18:02.262 }, 00:18:02.262 "claimed": false, 00:18:02.262 "zoned": false, 00:18:02.262 "supported_io_types": { 00:18:02.262 "read": true, 00:18:02.262 "write": true, 00:18:02.262 "unmap": true, 00:18:02.262 "flush": false, 00:18:02.262 "reset": true, 00:18:02.262 "nvme_admin": false, 00:18:02.262 "nvme_io": false, 00:18:02.262 "nvme_io_md": false, 00:18:02.262 "write_zeroes": true, 00:18:02.262 "zcopy": false, 00:18:02.262 "get_zone_info": false, 
00:18:02.262 "zone_management": false, 00:18:02.262 "zone_append": false, 00:18:02.262 "compare": false, 00:18:02.262 "compare_and_write": false, 00:18:02.262 "abort": false, 00:18:02.262 "seek_hole": true, 00:18:02.262 "seek_data": true, 00:18:02.262 "copy": false, 00:18:02.262 "nvme_iov_md": false 00:18:02.262 }, 00:18:02.262 "driver_specific": { 00:18:02.262 "lvol": { 00:18:02.262 "lvol_store_uuid": "faf4d8ce-be28-4090-9bec-9e09ba74de44", 00:18:02.262 "base_bdev": "nvme0n1", 00:18:02.262 "thin_provision": true, 00:18:02.262 "num_allocated_clusters": 0, 00:18:02.262 "snapshot": false, 00:18:02.262 "clone": false, 00:18:02.262 "esnap_clone": false 00:18:02.262 } 00:18:02.262 } 00:18:02.262 } 00:18:02.262 ]' 00:18:02.262 18:28:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:02.263 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:02.263 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:02.263 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:02.263 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:02.263 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:02.263 18:28:51 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:02.263 18:28:51 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:02.523 18:28:51 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:02.523 18:28:51 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:02.523 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:02.523 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:02.523 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 
00:18:02.523 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:02.523 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bc618f91-3b5e-4cfd-ac49-f714891d63e3 00:18:02.783 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:02.783 { 00:18:02.783 "name": "bc618f91-3b5e-4cfd-ac49-f714891d63e3", 00:18:02.783 "aliases": [ 00:18:02.783 "lvs/nvme0n1p0" 00:18:02.783 ], 00:18:02.783 "product_name": "Logical Volume", 00:18:02.783 "block_size": 4096, 00:18:02.783 "num_blocks": 26476544, 00:18:02.783 "uuid": "bc618f91-3b5e-4cfd-ac49-f714891d63e3", 00:18:02.783 "assigned_rate_limits": { 00:18:02.783 "rw_ios_per_sec": 0, 00:18:02.783 "rw_mbytes_per_sec": 0, 00:18:02.783 "r_mbytes_per_sec": 0, 00:18:02.783 "w_mbytes_per_sec": 0 00:18:02.783 }, 00:18:02.783 "claimed": false, 00:18:02.783 "zoned": false, 00:18:02.783 "supported_io_types": { 00:18:02.783 "read": true, 00:18:02.783 "write": true, 00:18:02.783 "unmap": true, 00:18:02.783 "flush": false, 00:18:02.783 "reset": true, 00:18:02.783 "nvme_admin": false, 00:18:02.783 "nvme_io": false, 00:18:02.783 "nvme_io_md": false, 00:18:02.783 "write_zeroes": true, 00:18:02.783 "zcopy": false, 00:18:02.783 "get_zone_info": false, 00:18:02.783 "zone_management": false, 00:18:02.783 "zone_append": false, 00:18:02.783 "compare": false, 00:18:02.783 "compare_and_write": false, 00:18:02.783 "abort": false, 00:18:02.783 "seek_hole": true, 00:18:02.783 "seek_data": true, 00:18:02.783 "copy": false, 00:18:02.783 "nvme_iov_md": false 00:18:02.783 }, 00:18:02.783 "driver_specific": { 00:18:02.783 "lvol": { 00:18:02.783 "lvol_store_uuid": "faf4d8ce-be28-4090-9bec-9e09ba74de44", 00:18:02.783 "base_bdev": "nvme0n1", 00:18:02.783 "thin_provision": true, 00:18:02.783 "num_allocated_clusters": 0, 00:18:02.783 "snapshot": false, 00:18:02.783 "clone": false, 00:18:02.783 "esnap_clone": false 00:18:02.783 } 00:18:02.783 } 
00:18:02.783 } 00:18:02.783 ]' 00:18:02.783 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:02.783 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:02.783 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:02.783 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:02.783 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:02.783 18:28:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:02.783 18:28:51 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:02.783 18:28:51 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d bc618f91-3b5e-4cfd-ac49-f714891d63e3 --l2p_dram_limit 10' 00:18:02.783 18:28:51 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:02.783 18:28:51 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:02.783 18:28:51 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:02.783 18:28:51 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:02.783 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:02.783 18:28:51 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bc618f91-3b5e-4cfd-ac49-f714891d63e3 --l2p_dram_limit 10 -c nvc0n1p0 00:18:03.045 [2024-10-08 18:28:51.776828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.045 [2024-10-08 18:28:51.776897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:03.045 [2024-10-08 18:28:51.776913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:03.045 [2024-10-08 18:28:51.776923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.045 [2024-10-08 18:28:51.776984] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.045 [2024-10-08 18:28:51.776992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:03.045 [2024-10-08 18:28:51.777003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:03.045 [2024-10-08 18:28:51.777011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.045 [2024-10-08 18:28:51.777032] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:03.045 [2024-10-08 18:28:51.777301] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:03.045 [2024-10-08 18:28:51.777315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.045 [2024-10-08 18:28:51.777326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:03.045 [2024-10-08 18:28:51.777335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:03.045 [2024-10-08 18:28:51.777341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.045 [2024-10-08 18:28:51.777371] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cb48738f-b06a-4f62-98b8-1a25e482c773 00:18:03.045 [2024-10-08 18:28:51.778717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.045 [2024-10-08 18:28:51.778744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:03.045 [2024-10-08 18:28:51.778762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:03.045 [2024-10-08 18:28:51.778773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.045 [2024-10-08 18:28:51.785634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.045 [2024-10-08 18:28:51.785672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
memory pools 00:18:03.045 [2024-10-08 18:28:51.785686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.794 ms 00:18:03.045 [2024-10-08 18:28:51.785697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.045 [2024-10-08 18:28:51.785787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.045 [2024-10-08 18:28:51.785797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:03.045 [2024-10-08 18:28:51.785807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:03.046 [2024-10-08 18:28:51.785815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.046 [2024-10-08 18:28:51.785860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.046 [2024-10-08 18:28:51.785873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:03.046 [2024-10-08 18:28:51.785879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:03.046 [2024-10-08 18:28:51.785886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.046 [2024-10-08 18:28:51.785911] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:03.046 [2024-10-08 18:28:51.787543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.046 [2024-10-08 18:28:51.787572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:03.046 [2024-10-08 18:28:51.787584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.640 ms 00:18:03.046 [2024-10-08 18:28:51.787590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.046 [2024-10-08 18:28:51.787621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.046 [2024-10-08 18:28:51.787627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:03.046 [2024-10-08 
18:28:51.787637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:03.046 [2024-10-08 18:28:51.787643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.046 [2024-10-08 18:28:51.787658] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:03.046 [2024-10-08 18:28:51.787786] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:03.046 [2024-10-08 18:28:51.787797] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:03.046 [2024-10-08 18:28:51.787806] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:03.046 [2024-10-08 18:28:51.787816] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:03.046 [2024-10-08 18:28:51.787826] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:03.046 [2024-10-08 18:28:51.787840] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:03.046 [2024-10-08 18:28:51.787850] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:03.046 [2024-10-08 18:28:51.787858] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:03.046 [2024-10-08 18:28:51.787866] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:03.046 [2024-10-08 18:28:51.787874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.046 [2024-10-08 18:28:51.787880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:03.046 [2024-10-08 18:28:51.787887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:18:03.046 [2024-10-08 18:28:51.787893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:03.046 [2024-10-08 18:28:51.787976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.046 [2024-10-08 18:28:51.787982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:03.046 [2024-10-08 18:28:51.787990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:03.046 [2024-10-08 18:28:51.787998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.046 [2024-10-08 18:28:51.788075] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:03.046 [2024-10-08 18:28:51.788083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:03.046 [2024-10-08 18:28:51.788090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.046 [2024-10-08 18:28:51.788096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:03.046 [2024-10-08 18:28:51.788109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:03.046 [2024-10-08 18:28:51.788121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:03.046 [2024-10-08 18:28:51.788128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.046 [2024-10-08 18:28:51.788141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:03.046 [2024-10-08 18:28:51.788146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:03.046 [2024-10-08 18:28:51.788154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.046 [2024-10-08 18:28:51.788159] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md 00:18:03.046 [2024-10-08 18:28:51.788166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:03.046 [2024-10-08 18:28:51.788171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:03.046 [2024-10-08 18:28:51.788185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:03.046 [2024-10-08 18:28:51.788193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:03.046 [2024-10-08 18:28:51.788207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.046 [2024-10-08 18:28:51.788220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:03.046 [2024-10-08 18:28:51.788226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.046 [2024-10-08 18:28:51.788240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:03.046 [2024-10-08 18:28:51.788247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.046 [2024-10-08 18:28:51.788261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:03.046 [2024-10-08 18:28:51.788267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:03.046 [2024-10-08 18:28:51.788281] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_md 00:18:03.046 [2024-10-08 18:28:51.788288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.046 [2024-10-08 18:28:51.788301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:03.046 [2024-10-08 18:28:51.788307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:03.046 [2024-10-08 18:28:51.788316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.046 [2024-10-08 18:28:51.788322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:03.046 [2024-10-08 18:28:51.788329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:03.046 [2024-10-08 18:28:51.788334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:03.046 [2024-10-08 18:28:51.788348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:03.046 [2024-10-08 18:28:51.788355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788360] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:03.046 [2024-10-08 18:28:51.788373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:03.046 [2024-10-08 18:28:51.788379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.046 [2024-10-08 18:28:51.788388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.046 [2024-10-08 18:28:51.788394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:03.046 [2024-10-08 18:28:51.788404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:03.046 [2024-10-08 
18:28:51.788410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:03.046 [2024-10-08 18:28:51.788417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:03.046 [2024-10-08 18:28:51.788423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:03.046 [2024-10-08 18:28:51.788431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:03.046 [2024-10-08 18:28:51.788440] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:03.046 [2024-10-08 18:28:51.788453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.046 [2024-10-08 18:28:51.788461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:03.046 [2024-10-08 18:28:51.788469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:03.046 [2024-10-08 18:28:51.788475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:03.046 [2024-10-08 18:28:51.788484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:03.046 [2024-10-08 18:28:51.788490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:03.046 [2024-10-08 18:28:51.788500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:03.046 [2024-10-08 18:28:51.788506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:03.046 [2024-10-08 
18:28:51.788514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:03.046 [2024-10-08 18:28:51.788520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:03.046 [2024-10-08 18:28:51.788528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:03.046 [2024-10-08 18:28:51.788535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:03.046 [2024-10-08 18:28:51.788543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:03.046 [2024-10-08 18:28:51.788549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:03.046 [2024-10-08 18:28:51.788558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:03.046 [2024-10-08 18:28:51.788564] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:03.046 [2024-10-08 18:28:51.788574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.046 [2024-10-08 18:28:51.788581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:03.046 [2024-10-08 18:28:51.788589] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:03.047 [2024-10-08 18:28:51.788595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:03.047 [2024-10-08 18:28:51.788603] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:03.047 [2024-10-08 18:28:51.788609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.047 [2024-10-08 18:28:51.788618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:03.047 [2024-10-08 18:28:51.788623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:18:03.047 [2024-10-08 18:28:51.788630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.047 [2024-10-08 18:28:51.788663] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:03.047 [2024-10-08 18:28:51.788672] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:05.602 [2024-10-08 18:28:54.274160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.274240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:05.602 [2024-10-08 18:28:54.274260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2485.489 ms 00:18:05.602 [2024-10-08 18:28:54.274277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.284932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.284988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.602 [2024-10-08 18:28:54.285002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.566 ms 00:18:05.602 [2024-10-08 18:28:54.285015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.285126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:05.602 [2024-10-08 18:28:54.285143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:05.602 [2024-10-08 18:28:54.285154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:05.602 [2024-10-08 18:28:54.285164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.294765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.294817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.602 [2024-10-08 18:28:54.294829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.548 ms 00:18:05.602 [2024-10-08 18:28:54.294844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.294885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.294898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.602 [2024-10-08 18:28:54.294906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:05.602 [2024-10-08 18:28:54.294915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.295317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.295337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.602 [2024-10-08 18:28:54.295346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:18:05.602 [2024-10-08 18:28:54.295358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.295478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.295489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.602 [2024-10-08 18:28:54.295500] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:05.602 [2024-10-08 18:28:54.295510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.315154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.315228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:05.602 [2024-10-08 18:28:54.315247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.603 ms 00:18:05.602 [2024-10-08 18:28:54.315261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.327053] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:05.602 [2024-10-08 18:28:54.330218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.330249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:05.602 [2024-10-08 18:28:54.330263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.786 ms 00:18:05.602 [2024-10-08 18:28:54.330270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.379661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.379721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:05.602 [2024-10-08 18:28:54.379739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.352 ms 00:18:05.602 [2024-10-08 18:28:54.379763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.379961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.379971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:05.602 [2024-10-08 18:28:54.379986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 
00:18:05.602 [2024-10-08 18:28:54.379994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.382899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.382936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:05.602 [2024-10-08 18:28:54.382950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.884 ms 00:18:05.602 [2024-10-08 18:28:54.382959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.385405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.385434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:05.602 [2024-10-08 18:28:54.385447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.405 ms 00:18:05.602 [2024-10-08 18:28:54.385454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.602 [2024-10-08 18:28:54.385777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.602 [2024-10-08 18:28:54.385792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:05.602 [2024-10-08 18:28:54.385805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:05.602 [2024-10-08 18:28:54.385814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.603 [2024-10-08 18:28:54.413892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.603 [2024-10-08 18:28:54.413937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:05.603 [2024-10-08 18:28:54.413952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.052 ms 00:18:05.603 [2024-10-08 18:28:54.413960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.603 [2024-10-08 18:28:54.417772] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.603 [2024-10-08 18:28:54.417805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:05.603 [2024-10-08 18:28:54.417818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.738 ms 00:18:05.603 [2024-10-08 18:28:54.417826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.603 [2024-10-08 18:28:54.420688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.603 [2024-10-08 18:28:54.420892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:05.603 [2024-10-08 18:28:54.420913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.827 ms 00:18:05.603 [2024-10-08 18:28:54.420921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.603 [2024-10-08 18:28:54.423962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.603 [2024-10-08 18:28:54.423993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:05.603 [2024-10-08 18:28:54.424009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.005 ms 00:18:05.603 [2024-10-08 18:28:54.424018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.603 [2024-10-08 18:28:54.424058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.603 [2024-10-08 18:28:54.424068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:05.603 [2024-10-08 18:28:54.424085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:05.603 [2024-10-08 18:28:54.424094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.603 [2024-10-08 18:28:54.424173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.603 [2024-10-08 18:28:54.424182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:05.603 [2024-10-08 
18:28:54.424192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:05.603 [2024-10-08 18:28:54.424199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.603 [2024-10-08 18:28:54.425304] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2647.921 ms, result 0 00:18:05.603 { 00:18:05.603 "name": "ftl0", 00:18:05.603 "uuid": "cb48738f-b06a-4f62-98b8-1a25e482c773" 00:18:05.603 } 00:18:05.603 18:28:54 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:05.603 18:28:54 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:05.861 18:28:54 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:05.861 18:28:54 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:06.121 [2024-10-08 18:28:54.840889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.841159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:06.121 [2024-10-08 18:28:54.841219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:06.121 [2024-10-08 18:28:54.841245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.841294] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:06.121 [2024-10-08 18:28:54.841951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.842052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:06.121 [2024-10-08 18:28:54.842103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:18:06.121 [2024-10-08 18:28:54.842126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 
[2024-10-08 18:28:54.842405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.842477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:06.121 [2024-10-08 18:28:54.842524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:18:06.121 [2024-10-08 18:28:54.842546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.846013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.846092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:06.121 [2024-10-08 18:28:54.846145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.430 ms 00:18:06.121 [2024-10-08 18:28:54.846167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.852415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.852511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:06.121 [2024-10-08 18:28:54.852565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.213 ms 00:18:06.121 [2024-10-08 18:28:54.852587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.854252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.854354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:06.121 [2024-10-08 18:28:54.854411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.570 ms 00:18:06.121 [2024-10-08 18:28:54.854433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.858576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.858687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid 
map metadata 00:18:06.121 [2024-10-08 18:28:54.858742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.060 ms 00:18:06.121 [2024-10-08 18:28:54.858807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.858943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.858968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:06.121 [2024-10-08 18:28:54.859015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:06.121 [2024-10-08 18:28:54.859037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.860810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.860901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:06.121 [2024-10-08 18:28:54.860953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.738 ms 00:18:06.121 [2024-10-08 18:28:54.860975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.862347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.862439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:06.121 [2024-10-08 18:28:54.862490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:18:06.121 [2024-10-08 18:28:54.862512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.863499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.863586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:06.121 [2024-10-08 18:28:54.863637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:18:06.121 [2024-10-08 18:28:54.863658] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.864997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.121 [2024-10-08 18:28:54.865087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:06.121 [2024-10-08 18:28:54.865139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.168 ms 00:18:06.121 [2024-10-08 18:28:54.865188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.121 [2024-10-08 18:28:54.865235] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:06.121 [2024-10-08 18:28:54.865291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:06.121 [2024-10-08 18:28:54.865364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:06.121 [2024-10-08 18:28:54.865402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:06.121 [2024-10-08 18:28:54.865436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:06.121 [2024-10-08 18:28:54.865498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:06.121 [2024-10-08 18:28:54.865530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:06.121 [2024-10-08 18:28:54.865559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:06.121 [2024-10-08 18:28:54.865617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:06.121 [2024-10-08 18:28:54.865649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:06.121 [2024-10-08 18:28:54.865681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
10: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.865710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.865785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.865814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.865845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.865935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.865994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866665] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.866970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867004] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867317] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 
18:28:54.867436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:06.122 [2024-10-08 18:28:54.867506] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:06.122 [2024-10-08 18:28:54.867517] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cb48738f-b06a-4f62-98b8-1a25e482c773 00:18:06.122 [2024-10-08 18:28:54.867526] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:06.122 [2024-10-08 18:28:54.867535] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:06.122 [2024-10-08 18:28:54.867543] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:06.123 [2024-10-08 18:28:54.867553] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:06.123 [2024-10-08 18:28:54.867560] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:06.123 [2024-10-08 18:28:54.867570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:06.123 [2024-10-08 18:28:54.867577] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] high: 0 00:18:06.123 [2024-10-08 18:28:54.867586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:06.123 [2024-10-08 18:28:54.867593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:06.123 [2024-10-08 18:28:54.867602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.123 [2024-10-08 18:28:54.867612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:06.123 [2024-10-08 18:28:54.867623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.372 ms 00:18:06.123 [2024-10-08 18:28:54.867630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.869515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.123 [2024-10-08 18:28:54.869537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:06.123 [2024-10-08 18:28:54.869549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.847 ms 00:18:06.123 [2024-10-08 18:28:54.869556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.869655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.123 [2024-10-08 18:28:54.869664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:06.123 [2024-10-08 18:28:54.869674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:18:06.123 [2024-10-08 18:28:54.869681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.875974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.876008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:06.123 [2024-10-08 18:28:54.876020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.876028] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.876099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.876108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:06.123 [2024-10-08 18:28:54.876118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.876125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.876200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.876211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:06.123 [2024-10-08 18:28:54.876221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.876229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.876249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.876259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:06.123 [2024-10-08 18:28:54.876268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.876276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.887643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.887692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:06.123 [2024-10-08 18:28:54.887705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.887714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.897128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 
[2024-10-08 18:28:54.897176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:06.123 [2024-10-08 18:28:54.897191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.897202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.897289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.897299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.123 [2024-10-08 18:28:54.897309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.897316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.897353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.897363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.123 [2024-10-08 18:28:54.897376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.897391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.897463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.897472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.123 [2024-10-08 18:28:54.897488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.897495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.897526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.897535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:06.123 [2024-10-08 18:28:54.897546] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.897558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.897603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.897612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.123 [2024-10-08 18:28:54.897621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.897628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.897675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:06.123 [2024-10-08 18:28:54.897684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.123 [2024-10-08 18:28:54.897696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:06.123 [2024-10-08 18:28:54.897704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.123 [2024-10-08 18:28:54.897872] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.941 ms, result 0 00:18:06.123 true 00:18:06.123 18:28:54 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 87236 00:18:06.123 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 87236 ']' 00:18:06.123 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 87236 00:18:06.123 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:06.123 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:06.123 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87236 00:18:06.123 killing process with pid 87236 00:18:06.123 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:06.123 18:28:54 ftl.ftl_restore -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:06.123 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87236' 00:18:06.123 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 87236 00:18:06.123 18:28:54 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 87236 00:18:12.702 18:29:00 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:16.887 262144+0 records in 00:18:16.887 262144+0 records out 00:18:16.887 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.96851 s, 271 MB/s 00:18:16.887 18:29:04 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:17.820 18:29:06 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:17.820 [2024-10-08 18:29:06.639527] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:18:17.820 [2024-10-08 18:29:06.639651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87438 ] 00:18:18.079 [2024-10-08 18:29:06.768990] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:18.079 [2024-10-08 18:29:06.788207] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.079 [2024-10-08 18:29:06.831699] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.339 [2024-10-08 18:29:06.932061] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:18.339 [2024-10-08 18:29:06.932146] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:18.339 [2024-10-08 18:29:07.083942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.084010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:18.339 [2024-10-08 18:29:07.084025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:18.339 [2024-10-08 18:29:07.084032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.084081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.084089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:18.339 [2024-10-08 18:29:07.084099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:18.339 [2024-10-08 18:29:07.084105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.084123] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:18.339 [2024-10-08 18:29:07.084330] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:18.339 [2024-10-08 18:29:07.084342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.084350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:18.339 [2024-10-08 18:29:07.084357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 
00:18:18.339 [2024-10-08 18:29:07.084365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.085695] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:18.339 [2024-10-08 18:29:07.088223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.088255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:18.339 [2024-10-08 18:29:07.088265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.529 ms 00:18:18.339 [2024-10-08 18:29:07.088272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.088329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.088337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:18.339 [2024-10-08 18:29:07.088347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:18.339 [2024-10-08 18:29:07.088353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.094558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.094589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:18.339 [2024-10-08 18:29:07.094605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.157 ms 00:18:18.339 [2024-10-08 18:29:07.094620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.094680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.094691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:18.339 [2024-10-08 18:29:07.094700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:18.339 [2024-10-08 18:29:07.094707] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.094769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.094777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:18.339 [2024-10-08 18:29:07.094785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:18.339 [2024-10-08 18:29:07.094791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.094816] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:18.339 [2024-10-08 18:29:07.096337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.096534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:18.339 [2024-10-08 18:29:07.096547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.527 ms 00:18:18.339 [2024-10-08 18:29:07.096554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.096584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.096592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:18.339 [2024-10-08 18:29:07.096599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:18.339 [2024-10-08 18:29:07.096605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.096632] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:18.339 [2024-10-08 18:29:07.096654] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:18.339 [2024-10-08 18:29:07.096684] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:18.339 
[2024-10-08 18:29:07.096697] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:18.339 [2024-10-08 18:29:07.096794] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:18.339 [2024-10-08 18:29:07.096808] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:18.339 [2024-10-08 18:29:07.096816] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:18.339 [2024-10-08 18:29:07.096830] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:18.339 [2024-10-08 18:29:07.096838] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:18.339 [2024-10-08 18:29:07.096844] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:18.339 [2024-10-08 18:29:07.096851] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:18.339 [2024-10-08 18:29:07.096857] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:18.339 [2024-10-08 18:29:07.096862] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:18.339 [2024-10-08 18:29:07.096868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.096874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:18.339 [2024-10-08 18:29:07.096884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:18:18.339 [2024-10-08 18:29:07.096889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.096957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.339 [2024-10-08 18:29:07.096967] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:18.339 [2024-10-08 18:29:07.096974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:18.339 [2024-10-08 18:29:07.096979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.339 [2024-10-08 18:29:07.097054] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:18.339 [2024-10-08 18:29:07.097063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:18.339 [2024-10-08 18:29:07.097070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:18.339 [2024-10-08 18:29:07.097077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.339 [2024-10-08 18:29:07.097084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:18.339 [2024-10-08 18:29:07.097091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:18.339 [2024-10-08 18:29:07.097097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:18.339 [2024-10-08 18:29:07.097103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:18.339 [2024-10-08 18:29:07.097117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:18.339 [2024-10-08 18:29:07.097124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:18.339 [2024-10-08 18:29:07.097130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:18.339 [2024-10-08 18:29:07.097137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:18.339 [2024-10-08 18:29:07.097145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:18.339 [2024-10-08 18:29:07.097151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:18.339 [2024-10-08 18:29:07.097157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:18.339 [2024-10-08 18:29:07.097163] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.339 [2024-10-08 18:29:07.097170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:18.339 [2024-10-08 18:29:07.097176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:18.339 [2024-10-08 18:29:07.097183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.339 [2024-10-08 18:29:07.097189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:18.339 [2024-10-08 18:29:07.097195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:18.339 [2024-10-08 18:29:07.097202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:18.339 [2024-10-08 18:29:07.097208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:18.339 [2024-10-08 18:29:07.097214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:18.339 [2024-10-08 18:29:07.097220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:18.339 [2024-10-08 18:29:07.097226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:18.339 [2024-10-08 18:29:07.097232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:18.339 [2024-10-08 18:29:07.097238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:18.339 [2024-10-08 18:29:07.097250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:18.339 [2024-10-08 18:29:07.097256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:18.339 [2024-10-08 18:29:07.097262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:18.340 [2024-10-08 18:29:07.097269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:18.340 [2024-10-08 18:29:07.097275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:18.340 [2024-10-08 
18:29:07.097281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:18.340 [2024-10-08 18:29:07.097287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:18.340 [2024-10-08 18:29:07.097293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:18.340 [2024-10-08 18:29:07.097298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:18.340 [2024-10-08 18:29:07.097305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:18.340 [2024-10-08 18:29:07.097311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:18.340 [2024-10-08 18:29:07.097317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.340 [2024-10-08 18:29:07.097323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:18.340 [2024-10-08 18:29:07.097329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:18.340 [2024-10-08 18:29:07.097334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.340 [2024-10-08 18:29:07.097341] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:18.340 [2024-10-08 18:29:07.097349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:18.340 [2024-10-08 18:29:07.097357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:18.340 [2024-10-08 18:29:07.097372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.340 [2024-10-08 18:29:07.097379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:18.340 [2024-10-08 18:29:07.097385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:18.340 [2024-10-08 18:29:07.097391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:18.340 [2024-10-08 18:29:07.097398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region data_btm 00:18:18.340 [2024-10-08 18:29:07.097404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:18.340 [2024-10-08 18:29:07.097411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:18.340 [2024-10-08 18:29:07.097418] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:18.340 [2024-10-08 18:29:07.097426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:18.340 [2024-10-08 18:29:07.097434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:18.340 [2024-10-08 18:29:07.097441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:18.340 [2024-10-08 18:29:07.097447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:18.340 [2024-10-08 18:29:07.097454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:18.340 [2024-10-08 18:29:07.097461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:18.340 [2024-10-08 18:29:07.097469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:18.340 [2024-10-08 18:29:07.097476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:18.340 [2024-10-08 18:29:07.097482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:18.340 [2024-10-08 18:29:07.097489] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:18.340 [2024-10-08 18:29:07.097495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:18.340 [2024-10-08 18:29:07.097502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:18.340 [2024-10-08 18:29:07.097508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:18.340 [2024-10-08 18:29:07.097514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:18.340 [2024-10-08 18:29:07.097521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:18.340 [2024-10-08 18:29:07.097527] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:18.340 [2024-10-08 18:29:07.097534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:18.340 [2024-10-08 18:29:07.097545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:18.340 [2024-10-08 18:29:07.097553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:18.340 [2024-10-08 18:29:07.097559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:18.340 [2024-10-08 18:29:07.097566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:18.340 [2024-10-08 18:29:07.097573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.097582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:18.340 [2024-10-08 18:29:07.097589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:18:18.340 [2024-10-08 18:29:07.097596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.117536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.117687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.340 [2024-10-08 18:29:07.117738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.891 ms 00:18:18.340 [2024-10-08 18:29:07.117774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.117881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.117969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:18.340 [2024-10-08 18:29:07.118016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:18.340 [2024-10-08 18:29:07.118051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.129893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.130041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.340 [2024-10-08 18:29:07.130082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.720 ms 00:18:18.340 [2024-10-08 18:29:07.130099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.130154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 
18:29:07.130172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.340 [2024-10-08 18:29:07.130188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:18.340 [2024-10-08 18:29:07.130204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.130686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.130773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.340 [2024-10-08 18:29:07.130819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.379 ms 00:18:18.340 [2024-10-08 18:29:07.130838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.130963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.131048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.340 [2024-10-08 18:29:07.131070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:18.340 [2024-10-08 18:29:07.131091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.136517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.136617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.340 [2024-10-08 18:29:07.136656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.394 ms 00:18:18.340 [2024-10-08 18:29:07.136673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.139235] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:18.340 [2024-10-08 18:29:07.139341] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:18.340 
[2024-10-08 18:29:07.139440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.139458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:18.340 [2024-10-08 18:29:07.139474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.667 ms 00:18:18.340 [2024-10-08 18:29:07.139495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.151123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.151272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:18.340 [2024-10-08 18:29:07.151327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.278 ms 00:18:18.340 [2024-10-08 18:29:07.151346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.153234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.153342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:18.340 [2024-10-08 18:29:07.153397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.584 ms 00:18:18.340 [2024-10-08 18:29:07.153416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.154461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.154545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:18.340 [2024-10-08 18:29:07.154588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:18:18.340 [2024-10-08 18:29:07.154607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.154912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.154981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:18:18.340 [2024-10-08 18:29:07.155021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:18:18.340 [2024-10-08 18:29:07.155039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.171607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.171863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:18.340 [2024-10-08 18:29:07.171923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.539 ms 00:18:18.340 [2024-10-08 18:29:07.171942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.340 [2024-10-08 18:29:07.178114] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:18.340 [2024-10-08 18:29:07.180718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.340 [2024-10-08 18:29:07.180863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:18.340 [2024-10-08 18:29:07.180910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.731 ms 00:18:18.340 [2024-10-08 18:29:07.180933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.341 [2024-10-08 18:29:07.181008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.341 [2024-10-08 18:29:07.181032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:18.341 [2024-10-08 18:29:07.181270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:18.341 [2024-10-08 18:29:07.181288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.341 [2024-10-08 18:29:07.181420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.341 [2024-10-08 18:29:07.181526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:18.341 
[2024-10-08 18:29:07.181586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:18.341 [2024-10-08 18:29:07.181603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.341 [2024-10-08 18:29:07.181643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.341 [2024-10-08 18:29:07.181661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:18.341 [2024-10-08 18:29:07.181676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:18.341 [2024-10-08 18:29:07.181691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.341 [2024-10-08 18:29:07.181731] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:18.341 [2024-10-08 18:29:07.181759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.341 [2024-10-08 18:29:07.181821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:18.341 [2024-10-08 18:29:07.181840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:18.341 [2024-10-08 18:29:07.181855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.341 [2024-10-08 18:29:07.185218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.341 [2024-10-08 18:29:07.185313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:18.341 [2024-10-08 18:29:07.185380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.335 ms 00:18:18.341 [2024-10-08 18:29:07.185401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.341 [2024-10-08 18:29:07.185469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.341 [2024-10-08 18:29:07.185489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:18.341 [2024-10-08 18:29:07.185506] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:18.341 [2024-10-08 18:29:07.185520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.599 [2024-10-08 18:29:07.186489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.161 ms, result 0 00:18:19.529  [2024-10-08T18:29:09.313Z] Copying: 35/1024 [MB] (35 MBps) [2024-10-08T18:29:10.246Z] Copying: 70/1024 [MB] (34 MBps) [2024-10-08T18:29:11.619Z] Copying: 114/1024 [MB] (43 MBps) [2024-10-08T18:29:12.553Z] Copying: 156/1024 [MB] (42 MBps) [2024-10-08T18:29:13.493Z] Copying: 203/1024 [MB] (46 MBps) [2024-10-08T18:29:14.436Z] Copying: 240/1024 [MB] (37 MBps) [2024-10-08T18:29:15.370Z] Copying: 275/1024 [MB] (35 MBps) [2024-10-08T18:29:16.307Z] Copying: 315/1024 [MB] (39 MBps) [2024-10-08T18:29:17.248Z] Copying: 357/1024 [MB] (42 MBps) [2024-10-08T18:29:18.622Z] Copying: 394/1024 [MB] (36 MBps) [2024-10-08T18:29:19.556Z] Copying: 431/1024 [MB] (37 MBps) [2024-10-08T18:29:20.495Z] Copying: 475/1024 [MB] (43 MBps) [2024-10-08T18:29:21.437Z] Copying: 512/1024 [MB] (36 MBps) [2024-10-08T18:29:22.377Z] Copying: 543/1024 [MB] (30 MBps) [2024-10-08T18:29:23.319Z] Copying: 570/1024 [MB] (27 MBps) [2024-10-08T18:29:24.259Z] Copying: 597/1024 [MB] (27 MBps) [2024-10-08T18:29:25.202Z] Copying: 619/1024 [MB] (21 MBps) [2024-10-08T18:29:26.590Z] Copying: 636/1024 [MB] (17 MBps) [2024-10-08T18:29:27.534Z] Copying: 655/1024 [MB] (19 MBps) [2024-10-08T18:29:28.514Z] Copying: 677/1024 [MB] (21 MBps) [2024-10-08T18:29:29.454Z] Copying: 704/1024 [MB] (27 MBps) [2024-10-08T18:29:30.396Z] Copying: 726/1024 [MB] (22 MBps) [2024-10-08T18:29:31.339Z] Copying: 752/1024 [MB] (26 MBps) [2024-10-08T18:29:32.277Z] Copying: 779/1024 [MB] (26 MBps) [2024-10-08T18:29:33.215Z] Copying: 799/1024 [MB] (19 MBps) [2024-10-08T18:29:34.597Z] Copying: 820/1024 [MB] (21 MBps) [2024-10-08T18:29:35.541Z] Copying: 832/1024 [MB] (11 MBps) [2024-10-08T18:29:36.486Z] Copying: 
847/1024 [MB] (15 MBps) [2024-10-08T18:29:37.440Z] Copying: 866/1024 [MB] (18 MBps) [2024-10-08T18:29:38.397Z] Copying: 887/1024 [MB] (20 MBps) [2024-10-08T18:29:39.337Z] Copying: 902/1024 [MB] (14 MBps) [2024-10-08T18:29:40.278Z] Copying: 918/1024 [MB] (15 MBps) [2024-10-08T18:29:41.220Z] Copying: 933/1024 [MB] (14 MBps) [2024-10-08T18:29:42.598Z] Copying: 945/1024 [MB] (12 MBps) [2024-10-08T18:29:43.542Z] Copying: 962/1024 [MB] (17 MBps) [2024-10-08T18:29:44.485Z] Copying: 978/1024 [MB] (15 MBps) [2024-10-08T18:29:45.431Z] Copying: 991/1024 [MB] (13 MBps) [2024-10-08T18:29:46.366Z] Copying: 1001/1024 [MB] (10 MBps) [2024-10-08T18:29:46.932Z] Copying: 1013/1024 [MB] (11 MBps) [2024-10-08T18:29:46.932Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-10-08 18:29:46.927130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.082 [2024-10-08 18:29:46.927183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:58.082 [2024-10-08 18:29:46.927197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:58.082 [2024-10-08 18:29:46.927206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.082 [2024-10-08 18:29:46.927227] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:58.082 [2024-10-08 18:29:46.927672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.082 [2024-10-08 18:29:46.927688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:58.082 [2024-10-08 18:29:46.927697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:18:58.082 [2024-10-08 18:29:46.927705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.082 [2024-10-08 18:29:46.929738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.082 [2024-10-08 18:29:46.929778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Stop core poller 00:18:58.082 [2024-10-08 18:29:46.929794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.014 ms 00:18:58.082 [2024-10-08 18:29:46.929801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.342 [2024-10-08 18:29:46.946935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.342 [2024-10-08 18:29:46.946996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:58.342 [2024-10-08 18:29:46.947015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.116 ms 00:18:58.342 [2024-10-08 18:29:46.947026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.342 [2024-10-08 18:29:46.953131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.342 [2024-10-08 18:29:46.953318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:58.342 [2024-10-08 18:29:46.953335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.069 ms 00:18:58.342 [2024-10-08 18:29:46.953343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.342 [2024-10-08 18:29:46.954895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.342 [2024-10-08 18:29:46.954930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:58.342 [2024-10-08 18:29:46.954939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.489 ms 00:18:58.342 [2024-10-08 18:29:46.954946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.342 [2024-10-08 18:29:46.958588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.342 [2024-10-08 18:29:46.958630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:58.342 [2024-10-08 18:29:46.958640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.614 ms 00:18:58.342 [2024-10-08 18:29:46.958647] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.342 [2024-10-08 18:29:46.958765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.342 [2024-10-08 18:29:46.958775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:58.342 [2024-10-08 18:29:46.958783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:58.342 [2024-10-08 18:29:46.958790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.342 [2024-10-08 18:29:46.961037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.342 [2024-10-08 18:29:46.961069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:58.342 [2024-10-08 18:29:46.961079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.232 ms 00:18:58.342 [2024-10-08 18:29:46.961087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.342 [2024-10-08 18:29:46.962742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.342 [2024-10-08 18:29:46.962781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:58.342 [2024-10-08 18:29:46.962791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.627 ms 00:18:58.342 [2024-10-08 18:29:46.962797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.342 [2024-10-08 18:29:46.964063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.342 [2024-10-08 18:29:46.964192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:58.342 [2024-10-08 18:29:46.964215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:18:58.342 [2024-10-08 18:29:46.964223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.342 [2024-10-08 18:29:46.965507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.342 
[2024-10-08 18:29:46.965537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:58.342 [2024-10-08 18:29:46.965545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:18:58.342 [2024-10-08 18:29:46.965552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.342 [2024-10-08 18:29:46.965577] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:58.342 [2024-10-08 18:29:46.965591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:58.342 [Bands 2 through 100: identical output, 0 / 261120 wr_cnt: 0 state: free] 00:18:58.344 [2024-10-08 18:29:46.966375] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:58.344 [2024-10-08 18:29:46.966382] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cb48738f-b06a-4f62-98b8-1a25e482c773 00:18:58.344 [2024-10-08 18:29:46.966390] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:58.344 [2024-10-08 18:29:46.966397] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:58.344 [2024-10-08 18:29:46.966404] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:58.344 [2024-10-08 18:29:46.966411] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:58.344 [2024-10-08 18:29:46.966418] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:58.344 [2024-10-08 18:29:46.966425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:58.344 [2024-10-08 18:29:46.966432] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:58.344 [2024-10-08 18:29:46.966438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:58.344 [2024-10-08 18:29:46.966444] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] start: 0 00:18:58.344 [2024-10-08 18:29:46.966451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.344 [2024-10-08 18:29:46.966458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:58.344 [2024-10-08 18:29:46.966470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:18:58.344 [2024-10-08 18:29:46.966485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.967874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.344 [2024-10-08 18:29:46.967894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:58.344 [2024-10-08 18:29:46.967902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:18:58.344 [2024-10-08 18:29:46.967909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.967989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.344 [2024-10-08 18:29:46.967997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:58.344 [2024-10-08 18:29:46.968012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:58.344 [2024-10-08 18:29:46.968022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.972264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.972299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:58.344 [2024-10-08 18:29:46.972310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.972317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.972374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.972382] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:58.344 [2024-10-08 18:29:46.972394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.972401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.972436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.972445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:58.344 [2024-10-08 18:29:46.972457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.972464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.972479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.972486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:58.344 [2024-10-08 18:29:46.972497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.972506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.981280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.981340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:58.344 [2024-10-08 18:29:46.981350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.981358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.988250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.988298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:58.344 [2024-10-08 18:29:46.988316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:58.344 [2024-10-08 18:29:46.988324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.988376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.988385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:58.344 [2024-10-08 18:29:46.988393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.988400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.988444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.988453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:58.344 [2024-10-08 18:29:46.988461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.988469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.988540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.988549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:58.344 [2024-10-08 18:29:46.988557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.988565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.988590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.988599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:58.344 [2024-10-08 18:29:46.988606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.988613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.988651] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.988664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:58.344 [2024-10-08 18:29:46.988671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.988678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.988722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.344 [2024-10-08 18:29:46.988731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:58.344 [2024-10-08 18:29:46.988739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.344 [2024-10-08 18:29:46.988747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.344 [2024-10-08 18:29:46.988884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 61.725 ms, result 0 00:18:58.912 00:18:58.912 00:18:58.912 18:29:47 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:58.912 [2024-10-08 18:29:47.741918] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:18:58.912 [2024-10-08 18:29:47.742257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87862 ] 00:18:59.171 [2024-10-08 18:29:47.882492] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:59.171 [2024-10-08 18:29:47.910875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.171 [2024-10-08 18:29:47.954929] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.430 [2024-10-08 18:29:48.058668] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:59.430 [2024-10-08 18:29:48.058745] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:59.430 [2024-10-08 18:29:48.217259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.430 [2024-10-08 18:29:48.217528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:59.430 [2024-10-08 18:29:48.217556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.430 [2024-10-08 18:29:48.217565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.430 [2024-10-08 18:29:48.217629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.430 [2024-10-08 18:29:48.217639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.431 [2024-10-08 18:29:48.217648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:59.431 [2024-10-08 18:29:48.217655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.217678] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:59.431 [2024-10-08 18:29:48.217950] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:59.431 [2024-10-08 18:29:48.217965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.431 [2024-10-08 18:29:48.217980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.431 [2024-10-08 18:29:48.217988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 
00:18:59.431 [2024-10-08 18:29:48.217998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.219163] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:59.431 [2024-10-08 18:29:48.221404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.431 [2024-10-08 18:29:48.221440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:59.431 [2024-10-08 18:29:48.221457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:18:59.431 [2024-10-08 18:29:48.221469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.221524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.431 [2024-10-08 18:29:48.221537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:59.431 [2024-10-08 18:29:48.221548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:59.431 [2024-10-08 18:29:48.221556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.226566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.431 [2024-10-08 18:29:48.226602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.431 [2024-10-08 18:29:48.226613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.947 ms 00:18:59.431 [2024-10-08 18:29:48.226627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.226705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.431 [2024-10-08 18:29:48.226715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.431 [2024-10-08 18:29:48.226723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:59.431 [2024-10-08 18:29:48.226730] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.226800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.431 [2024-10-08 18:29:48.226810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:59.431 [2024-10-08 18:29:48.226819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:59.431 [2024-10-08 18:29:48.226826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.226851] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:59.431 [2024-10-08 18:29:48.228218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.431 [2024-10-08 18:29:48.228369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.431 [2024-10-08 18:29:48.228383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.372 ms 00:18:59.431 [2024-10-08 18:29:48.228392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.228424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.431 [2024-10-08 18:29:48.228432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:59.431 [2024-10-08 18:29:48.228439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:59.431 [2024-10-08 18:29:48.228454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.228477] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:59.431 [2024-10-08 18:29:48.228498] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:59.431 [2024-10-08 18:29:48.228537] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:59.431 
[2024-10-08 18:29:48.228552] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:59.431 [2024-10-08 18:29:48.228654] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:59.431 [2024-10-08 18:29:48.228665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:59.431 [2024-10-08 18:29:48.228679] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:59.431 [2024-10-08 18:29:48.228692] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:59.431 [2024-10-08 18:29:48.228700] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:59.431 [2024-10-08 18:29:48.228708] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:59.431 [2024-10-08 18:29:48.228716] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:59.431 [2024-10-08 18:29:48.228723] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:59.431 [2024-10-08 18:29:48.228730] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:59.431 [2024-10-08 18:29:48.228741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.431 [2024-10-08 18:29:48.228763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:59.431 [2024-10-08 18:29:48.228772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:18:59.431 [2024-10-08 18:29:48.228779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.228866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.431 [2024-10-08 18:29:48.228876] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:59.431 [2024-10-08 18:29:48.228887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:59.431 [2024-10-08 18:29:48.228894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.431 [2024-10-08 18:29:48.228992] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:59.431 [2024-10-08 18:29:48.229003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:59.431 [2024-10-08 18:29:48.229012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.431 [2024-10-08 18:29:48.229020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.431 [2024-10-08 18:29:48.229029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:59.431 [2024-10-08 18:29:48.229037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:59.431 [2024-10-08 18:29:48.229045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:59.431 [2024-10-08 18:29:48.229052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:59.431 [2024-10-08 18:29:48.229068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:59.431 [2024-10-08 18:29:48.229076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.431 [2024-10-08 18:29:48.229083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:59.431 [2024-10-08 18:29:48.229090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:59.431 [2024-10-08 18:29:48.229100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:59.431 [2024-10-08 18:29:48.229108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:59.431 [2024-10-08 18:29:48.229115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:59.431 [2024-10-08 18:29:48.229121] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.431 [2024-10-08 18:29:48.229128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:59.431 [2024-10-08 18:29:48.229134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:59.431 [2024-10-08 18:29:48.229141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.431 [2024-10-08 18:29:48.229147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:59.431 [2024-10-08 18:29:48.229153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:59.431 [2024-10-08 18:29:48.229160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.431 [2024-10-08 18:29:48.229166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:59.431 [2024-10-08 18:29:48.229173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:59.431 [2024-10-08 18:29:48.229180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.431 [2024-10-08 18:29:48.229188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:59.431 [2024-10-08 18:29:48.229195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:59.431 [2024-10-08 18:29:48.229202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.431 [2024-10-08 18:29:48.229214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:59.431 [2024-10-08 18:29:48.229221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:59.431 [2024-10-08 18:29:48.229228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:59.431 [2024-10-08 18:29:48.229234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:59.431 [2024-10-08 18:29:48.229241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:59.431 [2024-10-08 
18:29:48.229247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.431 [2024-10-08 18:29:48.229254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:59.432 [2024-10-08 18:29:48.229261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:59.432 [2024-10-08 18:29:48.229267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:59.432 [2024-10-08 18:29:48.229273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:59.432 [2024-10-08 18:29:48.229300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:59.432 [2024-10-08 18:29:48.229308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.432 [2024-10-08 18:29:48.229314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:59.432 [2024-10-08 18:29:48.229321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:59.432 [2024-10-08 18:29:48.229327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.432 [2024-10-08 18:29:48.229335] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:59.432 [2024-10-08 18:29:48.229345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:59.432 [2024-10-08 18:29:48.229355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:59.432 [2024-10-08 18:29:48.229378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:59.432 [2024-10-08 18:29:48.229386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:59.432 [2024-10-08 18:29:48.229393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:59.432 [2024-10-08 18:29:48.229400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:59.432 [2024-10-08 18:29:48.229407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region data_btm 00:18:59.432 [2024-10-08 18:29:48.229414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:59.432 [2024-10-08 18:29:48.229421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:59.432 [2024-10-08 18:29:48.229428] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:59.432 [2024-10-08 18:29:48.229438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.432 [2024-10-08 18:29:48.229446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:59.432 [2024-10-08 18:29:48.229453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:59.432 [2024-10-08 18:29:48.229462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:59.432 [2024-10-08 18:29:48.229469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:59.432 [2024-10-08 18:29:48.229476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:59.432 [2024-10-08 18:29:48.229486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:59.432 [2024-10-08 18:29:48.229493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:59.432 [2024-10-08 18:29:48.229500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:59.432 [2024-10-08 18:29:48.229508] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:59.432 [2024-10-08 18:29:48.229515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:59.432 [2024-10-08 18:29:48.229522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:59.432 [2024-10-08 18:29:48.229528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:59.432 [2024-10-08 18:29:48.229535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:59.432 [2024-10-08 18:29:48.229543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:59.432 [2024-10-08 18:29:48.229550] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:59.432 [2024-10-08 18:29:48.229558] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:59.432 [2024-10-08 18:29:48.229566] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:59.432 [2024-10-08 18:29:48.229573] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:59.432 [2024-10-08 18:29:48.229581] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:59.432 [2024-10-08 18:29:48.229588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:59.432 [2024-10-08 18:29:48.229595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-10-08 18:29:48.229604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:59.432 [2024-10-08 18:29:48.229612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.671 ms 00:18:59.432 [2024-10-08 18:29:48.229623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-10-08 18:29:48.245903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-10-08 18:29:48.246123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.432 [2024-10-08 18:29:48.246157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.234 ms 00:18:59.432 [2024-10-08 18:29:48.246166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-10-08 18:29:48.246274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-10-08 18:29:48.246284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:59.432 [2024-10-08 18:29:48.246292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:59.432 [2024-10-08 18:29:48.246300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-10-08 18:29:48.254611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-10-08 18:29:48.254656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.432 [2024-10-08 18:29:48.254674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.236 ms 00:18:59.432 [2024-10-08 18:29:48.254682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-10-08 18:29:48.254731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-10-08 
18:29:48.254740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.432 [2024-10-08 18:29:48.254771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:59.432 [2024-10-08 18:29:48.254779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-10-08 18:29:48.255124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-10-08 18:29:48.255153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.432 [2024-10-08 18:29:48.255162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:18:59.432 [2024-10-08 18:29:48.255170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-10-08 18:29:48.255293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-10-08 18:29:48.255307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.432 [2024-10-08 18:29:48.255317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:59.432 [2024-10-08 18:29:48.255325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-10-08 18:29:48.259988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-10-08 18:29:48.260025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.432 [2024-10-08 18:29:48.260035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.640 ms 00:18:59.432 [2024-10-08 18:29:48.260042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-10-08 18:29:48.262639] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:59.432 [2024-10-08 18:29:48.262677] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:59.432 
[2024-10-08 18:29:48.262691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-10-08 18:29:48.262700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:59.432 [2024-10-08 18:29:48.262715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.545 ms 00:18:59.432 [2024-10-08 18:29:48.262722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.696 [2024-10-08 18:29:48.278110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.696 [2024-10-08 18:29:48.278187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:59.696 [2024-10-08 18:29:48.278204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.327 ms 00:18:59.696 [2024-10-08 18:29:48.278213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.696 [2024-10-08 18:29:48.280408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.696 [2024-10-08 18:29:48.280586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:59.696 [2024-10-08 18:29:48.280604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:18:59.696 [2024-10-08 18:29:48.280613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.696 [2024-10-08 18:29:48.282024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.696 [2024-10-08 18:29:48.282054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:59.696 [2024-10-08 18:29:48.282064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:18:59.696 [2024-10-08 18:29:48.282072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.696 [2024-10-08 18:29:48.282410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.696 [2024-10-08 18:29:48.282427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:18:59.696 [2024-10-08 18:29:48.282438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:18:59.696 [2024-10-08 18:29:48.282446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.696 [2024-10-08 18:29:48.301223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.696 [2024-10-08 18:29:48.301305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:59.697 [2024-10-08 18:29:48.301319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.756 ms 00:18:59.697 [2024-10-08 18:29:48.301327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.697 [2024-10-08 18:29:48.309069] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:59.697 [2024-10-08 18:29:48.311980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.697 [2024-10-08 18:29:48.312015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:59.697 [2024-10-08 18:29:48.312035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.596 ms 00:18:59.697 [2024-10-08 18:29:48.312044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.697 [2024-10-08 18:29:48.312168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.697 [2024-10-08 18:29:48.312181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:59.697 [2024-10-08 18:29:48.312198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:59.697 [2024-10-08 18:29:48.312205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.697 [2024-10-08 18:29:48.312295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.697 [2024-10-08 18:29:48.312306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:59.697 
[2024-10-08 18:29:48.312315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:59.697 [2024-10-08 18:29:48.312325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.697 [2024-10-08 18:29:48.312347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.697 [2024-10-08 18:29:48.312355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:59.697 [2024-10-08 18:29:48.312363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:59.697 [2024-10-08 18:29:48.312371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.697 [2024-10-08 18:29:48.312399] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:59.697 [2024-10-08 18:29:48.312409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.697 [2024-10-08 18:29:48.312416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:59.697 [2024-10-08 18:29:48.312426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:59.697 [2024-10-08 18:29:48.312435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.697 [2024-10-08 18:29:48.316056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.697 [2024-10-08 18:29:48.316105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:59.698 [2024-10-08 18:29:48.316117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.600 ms 00:18:59.698 [2024-10-08 18:29:48.316130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.698 [2024-10-08 18:29:48.316210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.698 [2024-10-08 18:29:48.316222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:59.698 [2024-10-08 18:29:48.316236] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:59.698 [2024-10-08 18:29:48.316244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.698 [2024-10-08 18:29:48.317156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.490 ms, result 0 00:19:01.072  [2024-10-08T18:29:50.555Z] Copying: 14/1024 [MB] (14 MBps) [2024-10-08T18:29:51.524Z] Copying: 37/1024 [MB] (22 MBps) [2024-10-08T18:29:52.902Z] Copying: 47/1024 [MB] (10 MBps) [2024-10-08T18:29:53.842Z] Copying: 58/1024 [MB] (11 MBps) [2024-10-08T18:29:54.788Z] Copying: 70/1024 [MB] (11 MBps) [2024-10-08T18:29:55.724Z] Copying: 82296/1048576 [kB] (9908 kBps) [2024-10-08T18:29:56.668Z] Copying: 91/1024 [MB] (10 MBps) [2024-10-08T18:29:57.607Z] Copying: 103/1024 [MB] (12 MBps) [2024-10-08T18:29:58.575Z] Copying: 115/1024 [MB] (11 MBps) [2024-10-08T18:29:59.507Z] Copying: 129/1024 [MB] (13 MBps) [2024-10-08T18:30:00.891Z] Copying: 145/1024 [MB] (16 MBps) [2024-10-08T18:30:01.834Z] Copying: 157/1024 [MB] (11 MBps) [2024-10-08T18:30:02.778Z] Copying: 170/1024 [MB] (12 MBps) [2024-10-08T18:30:03.720Z] Copying: 181/1024 [MB] (11 MBps) [2024-10-08T18:30:04.660Z] Copying: 191/1024 [MB] (10 MBps) [2024-10-08T18:30:05.631Z] Copying: 204/1024 [MB] (12 MBps) [2024-10-08T18:30:06.596Z] Copying: 218/1024 [MB] (14 MBps) [2024-10-08T18:30:07.537Z] Copying: 233656/1048576 [kB] (9920 kBps) [2024-10-08T18:30:08.924Z] Copying: 243576/1048576 [kB] (9920 kBps) [2024-10-08T18:30:09.498Z] Copying: 247/1024 [MB] (10 MBps) [2024-10-08T18:30:10.885Z] Copying: 263880/1048576 [kB] (10060 kBps) [2024-10-08T18:30:11.527Z] Copying: 273140/1048576 [kB] (9260 kBps) [2024-10-08T18:30:12.914Z] Copying: 279/1024 [MB] (12 MBps) [2024-10-08T18:30:13.858Z] Copying: 295952/1048576 [kB] (9700 kBps) [2024-10-08T18:30:14.802Z] Copying: 299/1024 [MB] (10 MBps) [2024-10-08T18:30:15.749Z] Copying: 316680/1048576 [kB] (9492 kBps) [2024-10-08T18:30:16.695Z] Copying: 326424/1048576 
[kB] (9744 kBps) [2024-10-08T18:30:17.641Z] Copying: 329/1024 [MB] (10 MBps) [2024-10-08T18:30:18.588Z] Copying: 347432/1048576 [kB] (9860 kBps) [2024-10-08T18:30:19.534Z] Copying: 356656/1048576 [kB] (9224 kBps) [2024-10-08T18:30:20.923Z] Copying: 366008/1048576 [kB] (9352 kBps) [2024-10-08T18:30:21.496Z] Copying: 375960/1048576 [kB] (9952 kBps) [2024-10-08T18:30:22.886Z] Copying: 377/1024 [MB] (10 MBps) [2024-10-08T18:30:23.833Z] Copying: 396532/1048576 [kB] (9852 kBps) [2024-10-08T18:30:24.777Z] Copying: 397/1024 [MB] (10 MBps) [2024-10-08T18:30:25.723Z] Copying: 419/1024 [MB] (21 MBps) [2024-10-08T18:30:26.707Z] Copying: 432/1024 [MB] (13 MBps) [2024-10-08T18:30:27.652Z] Copying: 447/1024 [MB] (14 MBps) [2024-10-08T18:30:28.596Z] Copying: 457/1024 [MB] (10 MBps) [2024-10-08T18:30:29.540Z] Copying: 467/1024 [MB] (10 MBps) [2024-10-08T18:30:30.929Z] Copying: 479/1024 [MB] (11 MBps) [2024-10-08T18:30:31.504Z] Copying: 489/1024 [MB] (10 MBps) [2024-10-08T18:30:32.894Z] Copying: 511232/1048576 [kB] (9944 kBps) [2024-10-08T18:30:33.839Z] Copying: 521112/1048576 [kB] (9880 kBps) [2024-10-08T18:30:34.784Z] Copying: 531200/1048576 [kB] (10088 kBps) [2024-10-08T18:30:35.729Z] Copying: 538/1024 [MB] (19 MBps) [2024-10-08T18:30:36.672Z] Copying: 560/1024 [MB] (22 MBps) [2024-10-08T18:30:37.615Z] Copying: 576/1024 [MB] (16 MBps) [2024-10-08T18:30:38.557Z] Copying: 608/1024 [MB] (31 MBps) [2024-10-08T18:30:39.499Z] Copying: 629/1024 [MB] (20 MBps) [2024-10-08T18:30:40.936Z] Copying: 646/1024 [MB] (16 MBps) [2024-10-08T18:30:41.510Z] Copying: 659/1024 [MB] (13 MBps) [2024-10-08T18:30:42.896Z] Copying: 673/1024 [MB] (13 MBps) [2024-10-08T18:30:43.839Z] Copying: 712/1024 [MB] (39 MBps) [2024-10-08T18:30:44.781Z] Copying: 753/1024 [MB] (41 MBps) [2024-10-08T18:30:45.724Z] Copying: 795/1024 [MB] (42 MBps) [2024-10-08T18:30:46.668Z] Copying: 825/1024 [MB] (29 MBps) [2024-10-08T18:30:47.634Z] Copying: 855556/1048576 [kB] (10232 kBps) [2024-10-08T18:30:48.575Z] Copying: 850/1024 
[MB] (15 MBps) [2024-10-08T18:30:49.517Z] Copying: 868/1024 [MB] (18 MBps) [2024-10-08T18:30:50.908Z] Copying: 882/1024 [MB] (13 MBps) [2024-10-08T18:30:51.853Z] Copying: 893/1024 [MB] (10 MBps) [2024-10-08T18:30:52.798Z] Copying: 903/1024 [MB] (10 MBps) [2024-10-08T18:30:53.741Z] Copying: 935416/1048576 [kB] (9784 kBps) [2024-10-08T18:30:54.684Z] Copying: 928/1024 [MB] (14 MBps) [2024-10-08T18:30:55.628Z] Copying: 945/1024 [MB] (17 MBps) [2024-10-08T18:30:56.572Z] Copying: 967/1024 [MB] (22 MBps) [2024-10-08T18:30:57.516Z] Copying: 1002/1024 [MB] (34 MBps) [2024-10-08T18:30:57.779Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-10-08 18:30:57.626892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.929 [2024-10-08 18:30:57.626963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:08.929 [2024-10-08 18:30:57.626985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:08.929 [2024-10-08 18:30:57.627000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.929 [2024-10-08 18:30:57.627042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:08.929 [2024-10-08 18:30:57.627608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.929 [2024-10-08 18:30:57.627630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:08.929 [2024-10-08 18:30:57.627646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:20:08.929 [2024-10-08 18:30:57.627660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.929 [2024-10-08 18:30:57.628068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.929 [2024-10-08 18:30:57.628089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:08.929 [2024-10-08 18:30:57.628105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.368 ms 00:20:08.929 [2024-10-08 18:30:57.628120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.929 [2024-10-08 18:30:57.632357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.929 [2024-10-08 18:30:57.632380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:08.929 [2024-10-08 18:30:57.632390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.209 ms 00:20:08.929 [2024-10-08 18:30:57.632399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.929 [2024-10-08 18:30:57.639128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.929 [2024-10-08 18:30:57.639154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:08.929 [2024-10-08 18:30:57.639166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.713 ms 00:20:08.929 [2024-10-08 18:30:57.639174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.929 [2024-10-08 18:30:57.641995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.929 [2024-10-08 18:30:57.642030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:08.929 [2024-10-08 18:30:57.642040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:20:08.929 [2024-10-08 18:30:57.642047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.929 [2024-10-08 18:30:57.646173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.929 [2024-10-08 18:30:57.646205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:08.930 [2024-10-08 18:30:57.646215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.096 ms 00:20:08.930 [2024-10-08 18:30:57.646222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.930 [2024-10-08 18:30:57.646328] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.930 [2024-10-08 18:30:57.646336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:08.930 [2024-10-08 18:30:57.646345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:08.930 [2024-10-08 18:30:57.646352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.930 [2024-10-08 18:30:57.648982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.930 [2024-10-08 18:30:57.649012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:08.930 [2024-10-08 18:30:57.649021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:20:08.930 [2024-10-08 18:30:57.649027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.930 [2024-10-08 18:30:57.651440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.930 [2024-10-08 18:30:57.651469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:08.930 [2024-10-08 18:30:57.651477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.385 ms 00:20:08.930 [2024-10-08 18:30:57.651484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.930 [2024-10-08 18:30:57.652848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.930 [2024-10-08 18:30:57.652888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:08.930 [2024-10-08 18:30:57.652896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:20:08.930 [2024-10-08 18:30:57.652903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.930 [2024-10-08 18:30:57.654845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.930 [2024-10-08 18:30:57.654889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:08.930 
[2024-10-08 18:30:57.654901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.891 ms 00:20:08.930 [2024-10-08 18:30:57.654910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.930 [2024-10-08 18:30:57.654948] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:08.930 [2024-10-08 18:30:57.654970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.654982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.654991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
12: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655374] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:08.930 [2024-10-08 18:30:57.655456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655477] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655580] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 
18:30:57.655683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:08.931 [2024-10-08 18:30:57.655728] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:08.931 [2024-10-08 18:30:57.655736] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cb48738f-b06a-4f62-98b8-1a25e482c773 00:20:08.931 [2024-10-08 18:30:57.655744] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:08.931 [2024-10-08 18:30:57.655983] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:08.931 [2024-10-08 18:30:57.656013] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:08.931 [2024-10-08 18:30:57.656034] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:08.931 [2024-10-08 18:30:57.656052] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:08.931 [2024-10-08 18:30:57.656072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:08.931 [2024-10-08 18:30:57.656090] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:08.931 [2024-10-08 18:30:57.656108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:08.931 [2024-10-08 18:30:57.656126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:08.931 [2024-10-08 18:30:57.656261] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.931 [2024-10-08 18:30:57.656284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:08.931 [2024-10-08 18:30:57.656311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:20:08.931 [2024-10-08 18:30:57.656335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.657745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.931 [2024-10-08 18:30:57.657866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:08.931 [2024-10-08 18:30:57.657923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:20:08.931 [2024-10-08 18:30:57.657945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.658031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.931 [2024-10-08 18:30:57.658067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:08.931 [2024-10-08 18:30:57.658127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:08.931 [2024-10-08 18:30:57.658149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.662462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.931 [2024-10-08 18:30:57.662563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.931 [2024-10-08 18:30:57.662611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.931 [2024-10-08 18:30:57.662651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.662716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.931 [2024-10-08 18:30:57.662815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
00:20:08.931 [2024-10-08 18:30:57.662884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.931 [2024-10-08 18:30:57.662895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.662955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.931 [2024-10-08 18:30:57.662965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.931 [2024-10-08 18:30:57.662974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.931 [2024-10-08 18:30:57.662981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.662995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.931 [2024-10-08 18:30:57.663002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.931 [2024-10-08 18:30:57.663014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.931 [2024-10-08 18:30:57.663021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.671384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.931 [2024-10-08 18:30:57.671428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.931 [2024-10-08 18:30:57.671439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.931 [2024-10-08 18:30:57.671447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.678236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.931 [2024-10-08 18:30:57.678279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.931 [2024-10-08 18:30:57.678294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.931 [2024-10-08 18:30:57.678302] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.678323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.931 [2024-10-08 18:30:57.678331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.931 [2024-10-08 18:30:57.678339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.931 [2024-10-08 18:30:57.678347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.678391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.931 [2024-10-08 18:30:57.678399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.931 [2024-10-08 18:30:57.678407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.931 [2024-10-08 18:30:57.678417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.678478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.931 [2024-10-08 18:30:57.678488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.931 [2024-10-08 18:30:57.678496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.931 [2024-10-08 18:30:57.678503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.931 [2024-10-08 18:30:57.678529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.931 [2024-10-08 18:30:57.678537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:08.931 [2024-10-08 18:30:57.678545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.932 [2024-10-08 18:30:57.678553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.932 [2024-10-08 18:30:57.678590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.932 [2024-10-08 18:30:57.678598] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.932 [2024-10-08 18:30:57.678606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.932 [2024-10-08 18:30:57.678613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.932 [2024-10-08 18:30:57.678653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.932 [2024-10-08 18:30:57.678662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.932 [2024-10-08 18:30:57.678670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.932 [2024-10-08 18:30:57.678680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.932 [2024-10-08 18:30:57.678811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.886 ms, result 0 00:20:09.192 00:20:09.192 00:20:09.192 18:30:57 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:11.757 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:11.757 18:31:00 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:11.757 [2024-10-08 18:31:00.144433] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:20:11.757 [2024-10-08 18:31:00.144580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88612 ] 00:20:11.757 [2024-10-08 18:31:00.278404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:20:11.757 [2024-10-08 18:31:00.296724] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.757 [2024-10-08 18:31:00.354814] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.757 [2024-10-08 18:31:00.469472] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:11.757 [2024-10-08 18:31:00.469560] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:12.021 [2024-10-08 18:31:00.632173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.021 [2024-10-08 18:31:00.632239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:12.021 [2024-10-08 18:31:00.632258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:12.021 [2024-10-08 18:31:00.632267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.021 [2024-10-08 18:31:00.632322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.021 [2024-10-08 18:31:00.632333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:12.021 [2024-10-08 18:31:00.632342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:12.021 [2024-10-08 18:31:00.632355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.021 [2024-10-08 18:31:00.632378] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:12.021 [2024-10-08 18:31:00.632645] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:12.021 [2024-10-08 18:31:00.632662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.021 [2024-10-08 18:31:00.632674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:12.021 [2024-10-08 18:31:00.632683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.292 ms 00:20:12.021 [2024-10-08 18:31:00.632697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.021 [2024-10-08 18:31:00.634494] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:12.021 [2024-10-08 18:31:00.638998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.021 [2024-10-08 18:31:00.639054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:12.021 [2024-10-08 18:31:00.639066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.507 ms 00:20:12.022 [2024-10-08 18:31:00.639074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.022 [2024-10-08 18:31:00.639166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.022 [2024-10-08 18:31:00.639179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:12.022 [2024-10-08 18:31:00.639194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:12.022 [2024-10-08 18:31:00.639205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.022 [2024-10-08 18:31:00.647905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.022 [2024-10-08 18:31:00.647951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:12.022 [2024-10-08 18:31:00.647969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.651 ms 00:20:12.022 [2024-10-08 18:31:00.647981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.022 [2024-10-08 18:31:00.648067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.022 [2024-10-08 18:31:00.648077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:12.022 [2024-10-08 18:31:00.648086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:12.022 [2024-10-08 
18:31:00.648094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.022 [2024-10-08 18:31:00.648159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.022 [2024-10-08 18:31:00.648171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:12.022 [2024-10-08 18:31:00.648180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:12.022 [2024-10-08 18:31:00.648187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.022 [2024-10-08 18:31:00.648214] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:12.022 [2024-10-08 18:31:00.650414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.022 [2024-10-08 18:31:00.650623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:12.022 [2024-10-08 18:31:00.650653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.205 ms 00:20:12.022 [2024-10-08 18:31:00.650662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.022 [2024-10-08 18:31:00.650703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.022 [2024-10-08 18:31:00.650712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:12.022 [2024-10-08 18:31:00.650720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:12.022 [2024-10-08 18:31:00.650733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.022 [2024-10-08 18:31:00.650785] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:12.022 [2024-10-08 18:31:00.650809] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:12.022 [2024-10-08 18:31:00.650848] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout 
blob load 0x48 bytes 00:20:12.022 [2024-10-08 18:31:00.650872] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:12.022 [2024-10-08 18:31:00.650979] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:12.022 [2024-10-08 18:31:00.650990] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:12.022 [2024-10-08 18:31:00.651001] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:12.022 [2024-10-08 18:31:00.651015] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:12.022 [2024-10-08 18:31:00.651024] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:12.022 [2024-10-08 18:31:00.651037] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:12.022 [2024-10-08 18:31:00.651045] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:12.022 [2024-10-08 18:31:00.651054] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:12.022 [2024-10-08 18:31:00.651071] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:12.022 [2024-10-08 18:31:00.651080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.022 [2024-10-08 18:31:00.651089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:12.022 [2024-10-08 18:31:00.651097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:20:12.022 [2024-10-08 18:31:00.651107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.022 [2024-10-08 18:31:00.651192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.022 [2024-10-08 
18:31:00.651204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:12.022 [2024-10-08 18:31:00.651213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:12.022 [2024-10-08 18:31:00.651222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.022 [2024-10-08 18:31:00.651326] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:12.022 [2024-10-08 18:31:00.651339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:12.022 [2024-10-08 18:31:00.651348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.022 [2024-10-08 18:31:00.651358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:12.022 [2024-10-08 18:31:00.651375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:12.022 [2024-10-08 18:31:00.651391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:12.022 [2024-10-08 18:31:00.651408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.022 [2024-10-08 18:31:00.651425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:12.022 [2024-10-08 18:31:00.651439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:12.022 [2024-10-08 18:31:00.651447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.022 [2024-10-08 18:31:00.651456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:12.022 [2024-10-08 18:31:00.651464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 
00:20:12.022 [2024-10-08 18:31:00.651472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:12.022 [2024-10-08 18:31:00.651491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:12.022 [2024-10-08 18:31:00.651500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:12.022 [2024-10-08 18:31:00.651517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.022 [2024-10-08 18:31:00.651532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:12.022 [2024-10-08 18:31:00.651541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.022 [2024-10-08 18:31:00.651557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:12.022 [2024-10-08 18:31:00.651565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.022 [2024-10-08 18:31:00.651586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:12.022 [2024-10-08 18:31:00.651594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.022 [2024-10-08 18:31:00.651607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:12.022 [2024-10-08 18:31:00.651614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
113.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.022 [2024-10-08 18:31:00.651628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:12.022 [2024-10-08 18:31:00.651636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:12.022 [2024-10-08 18:31:00.651642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.022 [2024-10-08 18:31:00.651649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:12.022 [2024-10-08 18:31:00.651657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:12.022 [2024-10-08 18:31:00.651664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:12.022 [2024-10-08 18:31:00.651678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:12.022 [2024-10-08 18:31:00.651685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651695] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:12.022 [2024-10-08 18:31:00.651703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:12.022 [2024-10-08 18:31:00.651713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.022 [2024-10-08 18:31:00.651721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.022 [2024-10-08 18:31:00.651729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:12.022 [2024-10-08 18:31:00.651736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:12.022 [2024-10-08 18:31:00.651744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:12.023 [2024-10-08 18:31:00.651767] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:12.023 [2024-10-08 18:31:00.651774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:12.023 [2024-10-08 18:31:00.651781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:12.023 [2024-10-08 18:31:00.651790] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:12.023 [2024-10-08 18:31:00.651800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.023 [2024-10-08 18:31:00.651809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:12.023 [2024-10-08 18:31:00.651817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:12.023 [2024-10-08 18:31:00.651825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:12.023 [2024-10-08 18:31:00.651832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:12.023 [2024-10-08 18:31:00.651843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:12.023 [2024-10-08 18:31:00.651851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:12.023 [2024-10-08 18:31:00.651858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:12.023 [2024-10-08 18:31:00.651865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 
00:20:12.023 [2024-10-08 18:31:00.651874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:12.023 [2024-10-08 18:31:00.651881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:12.023 [2024-10-08 18:31:00.651889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:12.023 [2024-10-08 18:31:00.651898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:12.023 [2024-10-08 18:31:00.651906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:12.023 [2024-10-08 18:31:00.651914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:12.023 [2024-10-08 18:31:00.651922] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:12.023 [2024-10-08 18:31:00.651930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.023 [2024-10-08 18:31:00.651940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:12.023 [2024-10-08 18:31:00.651947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:12.023 [2024-10-08 18:31:00.651956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:12.023 [2024-10-08 18:31:00.651964] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:12.023 [2024-10-08 18:31:00.651974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.651983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:12.023 [2024-10-08 18:31:00.651991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:20:12.023 [2024-10-08 18:31:00.652002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.686809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.687071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:12.023 [2024-10-08 18:31:00.687360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.738 ms 00:20:12.023 [2024-10-08 18:31:00.687497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.687671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.687801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:12.023 [2024-10-08 18:31:00.687842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:12.023 [2024-10-08 18:31:00.688335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.701277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.701473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:12.023 [2024-10-08 18:31:00.701539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.758 ms 00:20:12.023 [2024-10-08 18:31:00.701562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.701618] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.701652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:12.023 [2024-10-08 18:31:00.701673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:12.023 [2024-10-08 18:31:00.701693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.702303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.702455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:12.023 [2024-10-08 18:31:00.702515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:20:12.023 [2024-10-08 18:31:00.702538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.702712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.702735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:12.023 [2024-10-08 18:31:00.702836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:20:12.023 [2024-10-08 18:31:00.702862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.710210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.710390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:12.023 [2024-10-08 18:31:00.710447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.308 ms 00:20:12.023 [2024-10-08 18:31:00.710470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.714831] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:12.023 [2024-10-08 18:31:00.715020] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: 
[FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:12.023 [2024-10-08 18:31:00.715088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.715109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:12.023 [2024-10-08 18:31:00.715141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.499 ms 00:20:12.023 [2024-10-08 18:31:00.715160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.731892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.732059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:12.023 [2024-10-08 18:31:00.732128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.664 ms 00:20:12.023 [2024-10-08 18:31:00.732151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.735852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.736019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:12.023 [2024-10-08 18:31:00.736076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.552 ms 00:20:12.023 [2024-10-08 18:31:00.736099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.739315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.739485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:12.023 [2024-10-08 18:31:00.739545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.141 ms 00:20:12.023 [2024-10-08 18:31:00.739567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.740467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 
[2024-10-08 18:31:00.740705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:12.023 [2024-10-08 18:31:00.740732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:12.023 [2024-10-08 18:31:00.740775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.768737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.768820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:12.023 [2024-10-08 18:31:00.768834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.928 ms 00:20:12.023 [2024-10-08 18:31:00.768843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.777304] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:12.023 [2024-10-08 18:31:00.780717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.780919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:12.023 [2024-10-08 18:31:00.780948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.816 ms 00:20:12.023 [2024-10-08 18:31:00.780965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.781051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.781067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:12.023 [2024-10-08 18:31:00.781078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:12.023 [2024-10-08 18:31:00.781086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.023 [2024-10-08 18:31:00.781180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.023 [2024-10-08 18:31:00.781192] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:12.023 [2024-10-08 18:31:00.781201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:12.023 [2024-10-08 18:31:00.781214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.024 [2024-10-08 18:31:00.781236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.024 [2024-10-08 18:31:00.781250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:12.024 [2024-10-08 18:31:00.781260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:12.024 [2024-10-08 18:31:00.781271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.024 [2024-10-08 18:31:00.781308] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:12.024 [2024-10-08 18:31:00.781319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.024 [2024-10-08 18:31:00.781328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:12.024 [2024-10-08 18:31:00.781338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:12.024 [2024-10-08 18:31:00.781348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.024 [2024-10-08 18:31:00.787459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.024 [2024-10-08 18:31:00.787509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:12.024 [2024-10-08 18:31:00.787522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.091 ms 00:20:12.024 [2024-10-08 18:31:00.787530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.024 [2024-10-08 18:31:00.787628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.024 [2024-10-08 18:31:00.787638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:12.024 [2024-10-08 18:31:00.787656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:12.024 [2024-10-08 18:31:00.787665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.024 [2024-10-08 18:31:00.789395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.709 ms, result 0 00:20:12.970  [2024-10-08T18:31:03.211Z] Copying: 9716/1048576 [kB] (9716 kBps) [2024-10-08T18:31:04.155Z] Copying: 29/1024 [MB] (20 MBps) [2024-10-08T18:31:05.100Z] Copying: 48/1024 [MB] (18 MBps) [2024-10-08T18:31:06.054Z] Copying: 67/1024 [MB] (18 MBps) [2024-10-08T18:31:06.995Z] Copying: 86/1024 [MB] (19 MBps) [2024-10-08T18:31:07.937Z] Copying: 111/1024 [MB] (24 MBps) [2024-10-08T18:31:08.891Z] Copying: 143/1024 [MB] (32 MBps) [2024-10-08T18:31:09.831Z] Copying: 171/1024 [MB] (28 MBps) [2024-10-08T18:31:11.219Z] Copying: 200/1024 [MB] (28 MBps) [2024-10-08T18:31:12.163Z] Copying: 221/1024 [MB] (20 MBps) [2024-10-08T18:31:13.106Z] Copying: 242/1024 [MB] (21 MBps) [2024-10-08T18:31:14.050Z] Copying: 261/1024 [MB] (19 MBps) [2024-10-08T18:31:14.995Z] Copying: 279/1024 [MB] (18 MBps) [2024-10-08T18:31:15.939Z] Copying: 301/1024 [MB] (21 MBps) [2024-10-08T18:31:16.885Z] Copying: 325/1024 [MB] (24 MBps) [2024-10-08T18:31:17.830Z] Copying: 350/1024 [MB] (24 MBps) [2024-10-08T18:31:19.216Z] Copying: 369/1024 [MB] (19 MBps) [2024-10-08T18:31:20.161Z] Copying: 389/1024 [MB] (19 MBps) [2024-10-08T18:31:21.105Z] Copying: 412/1024 [MB] (23 MBps) [2024-10-08T18:31:22.050Z] Copying: 439/1024 [MB] (26 MBps) [2024-10-08T18:31:23.049Z] Copying: 460/1024 [MB] (20 MBps) [2024-10-08T18:31:23.991Z] Copying: 481/1024 [MB] (21 MBps) [2024-10-08T18:31:24.934Z] Copying: 509/1024 [MB] (27 MBps) [2024-10-08T18:31:25.877Z] Copying: 537/1024 [MB] (28 MBps) [2024-10-08T18:31:26.821Z] Copying: 562/1024 [MB] (24 MBps) [2024-10-08T18:31:28.208Z] Copying: 587/1024 [MB] (25 MBps) 
[2024-10-08T18:31:29.152Z] Copying: 613/1024 [MB] (25 MBps) [2024-10-08T18:31:30.116Z] Copying: 634/1024 [MB] (21 MBps) [2024-10-08T18:31:31.077Z] Copying: 657/1024 [MB] (22 MBps) [2024-10-08T18:31:32.022Z] Copying: 677/1024 [MB] (20 MBps) [2024-10-08T18:31:32.965Z] Copying: 710/1024 [MB] (32 MBps) [2024-10-08T18:31:33.910Z] Copying: 736/1024 [MB] (25 MBps) [2024-10-08T18:31:34.852Z] Copying: 752/1024 [MB] (16 MBps) [2024-10-08T18:31:36.239Z] Copying: 768/1024 [MB] (15 MBps) [2024-10-08T18:31:36.812Z] Copying: 778/1024 [MB] (10 MBps) [2024-10-08T18:31:38.198Z] Copying: 793/1024 [MB] (15 MBps) [2024-10-08T18:31:39.141Z] Copying: 813/1024 [MB] (19 MBps) [2024-10-08T18:31:40.084Z] Copying: 828/1024 [MB] (15 MBps) [2024-10-08T18:31:41.056Z] Copying: 844/1024 [MB] (15 MBps) [2024-10-08T18:31:41.999Z] Copying: 863/1024 [MB] (18 MBps) [2024-10-08T18:31:42.944Z] Copying: 880/1024 [MB] (16 MBps) [2024-10-08T18:31:43.891Z] Copying: 898/1024 [MB] (18 MBps) [2024-10-08T18:31:44.835Z] Copying: 915/1024 [MB] (17 MBps) [2024-10-08T18:31:46.226Z] Copying: 937/1024 [MB] (22 MBps) [2024-10-08T18:31:47.169Z] Copying: 959/1024 [MB] (21 MBps) [2024-10-08T18:31:48.112Z] Copying: 982/1024 [MB] (23 MBps) [2024-10-08T18:31:49.059Z] Copying: 1004/1024 [MB] (22 MBps) [2024-10-08T18:31:50.041Z] Copying: 1022/1024 [MB] (17 MBps) [2024-10-08T18:31:50.042Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-10-08 18:31:49.735190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.192 [2024-10-08 18:31:49.735258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:01.192 [2024-10-08 18:31:49.735272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:01.192 [2024-10-08 18:31:49.735281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.192 [2024-10-08 18:31:49.735561] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:01.192 
[2024-10-08 18:31:49.736040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.192 [2024-10-08 18:31:49.736060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:01.192 [2024-10-08 18:31:49.736071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:21:01.192 [2024-10-08 18:31:49.736084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.192 [2024-10-08 18:31:49.748165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.192 [2024-10-08 18:31:49.748201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:01.192 [2024-10-08 18:31:49.748211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.756 ms 00:21:01.192 [2024-10-08 18:31:49.748219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.192 [2024-10-08 18:31:49.770866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.192 [2024-10-08 18:31:49.770901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:01.192 [2024-10-08 18:31:49.770912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.624 ms 00:21:01.192 [2024-10-08 18:31:49.770920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.192 [2024-10-08 18:31:49.777063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.192 [2024-10-08 18:31:49.777199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:01.192 [2024-10-08 18:31:49.777215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.120 ms 00:21:01.192 [2024-10-08 18:31:49.777225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.192 [2024-10-08 18:31:49.778887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.192 [2024-10-08 18:31:49.778920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache 
metadata 00:21:01.192 [2024-10-08 18:31:49.778929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:21:01.192 [2024-10-08 18:31:49.778936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.192 [2024-10-08 18:31:49.782547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.192 [2024-10-08 18:31:49.782587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:01.192 [2024-10-08 18:31:49.782596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.581 ms 00:21:01.192 [2024-10-08 18:31:49.782604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.192 [2024-10-08 18:31:50.035378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.192 [2024-10-08 18:31:50.035458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:01.192 [2024-10-08 18:31:50.035473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 252.732 ms 00:21:01.192 [2024-10-08 18:31:50.035482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.192 [2024-10-08 18:31:50.038386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.192 [2024-10-08 18:31:50.038427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:01.192 [2024-10-08 18:31:50.038438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:21:01.192 [2024-10-08 18:31:50.038445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.453 [2024-10-08 18:31:50.040560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.453 [2024-10-08 18:31:50.040594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:01.453 [2024-10-08 18:31:50.040605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.083 ms 00:21:01.453 [2024-10-08 18:31:50.040613] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.453 [2024-10-08 18:31:50.042110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.453 [2024-10-08 18:31:50.042247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:01.453 [2024-10-08 18:31:50.042263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.466 ms 00:21:01.453 [2024-10-08 18:31:50.042272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.453 [2024-10-08 18:31:50.044056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.453 [2024-10-08 18:31:50.044090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:01.453 [2024-10-08 18:31:50.044100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.728 ms 00:21:01.453 [2024-10-08 18:31:50.044107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.453 [2024-10-08 18:31:50.044137] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:01.453 [2024-10-08 18:31:50.044152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 97024 / 261120 wr_cnt: 1 state: open 00:21:01.453 [2024-10-08 18:31:50.044164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044208] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044328] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:01.453 [2024-10-08 18:31:50.044362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044453] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 
18:31:50.044573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 
[2024-10-08 18:31:50.044695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:21:01.454 [2024-10-08 18:31:50.044832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: 
free 00:21:01.454 [2024-10-08 18:31:50.044953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.044997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.045006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.045014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.045023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.045031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:01.454 [2024-10-08 18:31:50.045057] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:01.454 [2024-10-08 18:31:50.045066] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cb48738f-b06a-4f62-98b8-1a25e482c773 00:21:01.454 [2024-10-08 18:31:50.045075] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 97024 00:21:01.454 [2024-10-08 18:31:50.045083] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 97984 00:21:01.454 [2024-10-08 18:31:50.045091] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 97024 00:21:01.454 [2024-10-08 18:31:50.045107] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0099 00:21:01.454 [2024-10-08 18:31:50.045116] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:01.454 [2024-10-08 18:31:50.045124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:01.454 [2024-10-08 18:31:50.045138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:01.454 [2024-10-08 18:31:50.045145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:01.454 [2024-10-08 18:31:50.045153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:01.454 [2024-10-08 18:31:50.045161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.454 [2024-10-08 18:31:50.045170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:01.454 [2024-10-08 18:31:50.045179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:21:01.454 [2024-10-08 18:31:50.045190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.454 [2024-10-08 18:31:50.046959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.455 [2024-10-08 18:31:50.046980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:01.455 [2024-10-08 18:31:50.046990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.752 ms 00:21:01.455 [2024-10-08 18:31:50.046997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.047078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.455 [2024-10-08 18:31:50.047092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:01.455 [2024-10-08 18:31:50.047100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:01.455 [2024-10-08 
18:31:50.047107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.051622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.051659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:01.455 [2024-10-08 18:31:50.051669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.051677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.051732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.051739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:01.455 [2024-10-08 18:31:50.051747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.051771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.051845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.051855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:01.455 [2024-10-08 18:31:50.051863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.051870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.051885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.051892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:01.455 [2024-10-08 18:31:50.051905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.051912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.061283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:21:01.455 [2024-10-08 18:31:50.061465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:01.455 [2024-10-08 18:31:50.061482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.061491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.068725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.068955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:01.455 [2024-10-08 18:31:50.068980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.068988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.069018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.069030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:01.455 [2024-10-08 18:31:50.069053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.069060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.069104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.069116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:01.455 [2024-10-08 18:31:50.069124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.069131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.069205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.069215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:01.455 [2024-10-08 18:31:50.069225] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.069232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.069265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.069275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:01.455 [2024-10-08 18:31:50.069282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.069290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.069323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.069331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:01.455 [2024-10-08 18:31:50.069341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.069348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.069389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:01.455 [2024-10-08 18:31:50.069398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:01.455 [2024-10-08 18:31:50.069405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:01.455 [2024-10-08 18:31:50.069413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.455 [2024-10-08 18:31:50.069526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.517 ms, result 0 00:21:02.839 00:21:02.839 00:21:02.839 18:31:51 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:02.839 
[2024-10-08 18:31:51.606438] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:21:02.839 [2024-10-08 18:31:51.606566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89145 ] 00:21:03.118 [2024-10-08 18:31:51.736964] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:03.118 [2024-10-08 18:31:51.757866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.118 [2024-10-08 18:31:51.806249] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.118 [2024-10-08 18:31:51.907515] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:03.118 [2024-10-08 18:31:51.907586] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:03.381 [2024-10-08 18:31:52.066840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.381 [2024-10-08 18:31:52.066900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:03.381 [2024-10-08 18:31:52.066917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:03.381 [2024-10-08 18:31:52.066926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.381 [2024-10-08 18:31:52.066988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.381 [2024-10-08 18:31:52.066999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:03.381 [2024-10-08 18:31:52.067007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:03.381 [2024-10-08 18:31:52.067019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.381 [2024-10-08 
18:31:52.067042] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:03.381 [2024-10-08 18:31:52.067327] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:03.381 [2024-10-08 18:31:52.067342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.381 [2024-10-08 18:31:52.067351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:03.381 [2024-10-08 18:31:52.067359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:21:03.382 [2024-10-08 18:31:52.067373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.382 [2024-10-08 18:31:52.068529] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:03.382 [2024-10-08 18:31:52.072006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.382 [2024-10-08 18:31:52.072055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:03.382 [2024-10-08 18:31:52.072068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.478 ms 00:21:03.382 [2024-10-08 18:31:52.072077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.382 [2024-10-08 18:31:52.072149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.382 [2024-10-08 18:31:52.072159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:03.382 [2024-10-08 18:31:52.072173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:03.382 [2024-10-08 18:31:52.072181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.382 [2024-10-08 18:31:52.077773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.382 [2024-10-08 18:31:52.077894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:03.382 
[2024-10-08 18:31:52.077962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.542 ms 00:21:03.382 [2024-10-08 18:31:52.077990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.382 [2024-10-08 18:31:52.078080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.382 [2024-10-08 18:31:52.078104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:03.382 [2024-10-08 18:31:52.078125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:03.382 [2024-10-08 18:31:52.078143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.382 [2024-10-08 18:31:52.078244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.382 [2024-10-08 18:31:52.078278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:03.382 [2024-10-08 18:31:52.078298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:03.382 [2024-10-08 18:31:52.078320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.382 [2024-10-08 18:31:52.078359] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:03.382 [2024-10-08 18:31:52.079878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.382 [2024-10-08 18:31:52.079980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:03.382 [2024-10-08 18:31:52.080029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:21:03.382 [2024-10-08 18:31:52.080052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.382 [2024-10-08 18:31:52.080098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.382 [2024-10-08 18:31:52.080128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:03.382 [2024-10-08 18:31:52.080148] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:03.382 [2024-10-08 18:31:52.080167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.382 [2024-10-08 18:31:52.080210] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:03.382 [2024-10-08 18:31:52.080242] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:03.382 [2024-10-08 18:31:52.080392] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:03.382 [2024-10-08 18:31:52.080409] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:03.382 [2024-10-08 18:31:52.080522] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:03.382 [2024-10-08 18:31:52.080533] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:03.382 [2024-10-08 18:31:52.080548] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:03.382 [2024-10-08 18:31:52.080565] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:03.382 [2024-10-08 18:31:52.080574] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:03.382 [2024-10-08 18:31:52.080587] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:03.382 [2024-10-08 18:31:52.080598] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:03.382 [2024-10-08 18:31:52.080609] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:03.382 [2024-10-08 18:31:52.080623] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:03.382 
[2024-10-08 18:31:52.080634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.382 [2024-10-08 18:31:52.080645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:03.382 [2024-10-08 18:31:52.080658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:21:03.382 [2024-10-08 18:31:52.080666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.382 [2024-10-08 18:31:52.080795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.382 [2024-10-08 18:31:52.080822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:03.382 [2024-10-08 18:31:52.081000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:21:03.382 [2024-10-08 18:31:52.081030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.382 [2024-10-08 18:31:52.081174] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:03.382 [2024-10-08 18:31:52.081276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:03.382 [2024-10-08 18:31:52.081312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:03.382 [2024-10-08 18:31:52.081332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:03.382 [2024-10-08 18:31:52.081348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:03.382 [2024-10-08 18:31:52.081362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:03.382 [2024-10-08 18:31:52.081374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
00:21:03.382 [2024-10-08 18:31:52.081388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:03.382 [2024-10-08 18:31:52.081394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:03.382 [2024-10-08 18:31:52.081401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:03.382 [2024-10-08 18:31:52.081407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:03.382 [2024-10-08 18:31:52.081418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:03.382 [2024-10-08 18:31:52.081425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:03.382 [2024-10-08 18:31:52.081438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:03.382 [2024-10-08 18:31:52.081445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:03.382 [2024-10-08 18:31:52.081458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.382 [2024-10-08 18:31:52.081471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:03.382 [2024-10-08 18:31:52.081477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.382 [2024-10-08 18:31:52.081490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:03.382 [2024-10-08 18:31:52.081496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 8.00 MiB 00:21:03.382 [2024-10-08 18:31:52.081509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:03.382 [2024-10-08 18:31:52.081515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:03.382 [2024-10-08 18:31:52.081530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:03.382 [2024-10-08 18:31:52.081536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:03.382 [2024-10-08 18:31:52.081549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:03.382 [2024-10-08 18:31:52.081556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:03.382 [2024-10-08 18:31:52.081562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:03.382 [2024-10-08 18:31:52.081568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:03.382 [2024-10-08 18:31:52.081575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:03.382 [2024-10-08 18:31:52.081581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:03.382 [2024-10-08 18:31:52.081595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:03.382 [2024-10-08 18:31:52.081601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081608] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:03.382 [2024-10-08 18:31:52.081616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:03.382 [2024-10-08 18:31:52.081631] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:03.382 [2024-10-08 18:31:52.081640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:03.382 [2024-10-08 18:31:52.081648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:03.382 [2024-10-08 18:31:52.081655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:03.382 [2024-10-08 18:31:52.081661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:03.382 [2024-10-08 18:31:52.081668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:03.382 [2024-10-08 18:31:52.081674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:03.382 [2024-10-08 18:31:52.081681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:03.382 [2024-10-08 18:31:52.081690] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:03.382 [2024-10-08 18:31:52.081700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:03.382 [2024-10-08 18:31:52.081708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:03.383 [2024-10-08 18:31:52.081715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:03.383 [2024-10-08 18:31:52.081722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:03.383 [2024-10-08 18:31:52.081729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:03.383 [2024-10-08 18:31:52.081736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb 
ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:03.383 [2024-10-08 18:31:52.081743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:03.383 [2024-10-08 18:31:52.081762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:03.383 [2024-10-08 18:31:52.081772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:03.383 [2024-10-08 18:31:52.081779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:03.383 [2024-10-08 18:31:52.081786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:03.383 [2024-10-08 18:31:52.081793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:03.383 [2024-10-08 18:31:52.081800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:03.383 [2024-10-08 18:31:52.081807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:03.383 [2024-10-08 18:31:52.081814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:03.383 [2024-10-08 18:31:52.081822] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:03.383 [2024-10-08 18:31:52.081831] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:03.383 [2024-10-08 18:31:52.081843] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:03.383 [2024-10-08 18:31:52.081850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:03.383 [2024-10-08 18:31:52.081857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:03.383 [2024-10-08 18:31:52.081865] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:03.383 [2024-10-08 18:31:52.081874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.081881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:03.383 [2024-10-08 18:31:52.081889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:21:03.383 [2024-10-08 18:31:52.081899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.101431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.101683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:03.383 [2024-10-08 18:31:52.101712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.483 ms 00:21:03.383 [2024-10-08 18:31:52.101725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.101911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.101936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:03.383 [2024-10-08 18:31:52.101950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:21:03.383 [2024-10-08 18:31:52.101961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:03.383 [2024-10-08 18:31:52.111580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.111621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:03.383 [2024-10-08 18:31:52.111632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.516 ms 00:21:03.383 [2024-10-08 18:31:52.111640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.111680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.111688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:03.383 [2024-10-08 18:31:52.111697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:03.383 [2024-10-08 18:31:52.111704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.112086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.112103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:03.383 [2024-10-08 18:31:52.112112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.346 ms 00:21:03.383 [2024-10-08 18:31:52.112123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.112245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.112253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:03.383 [2024-10-08 18:31:52.112260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:21:03.383 [2024-10-08 18:31:52.112268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.116969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.117005] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:03.383 [2024-10-08 18:31:52.117015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.662 ms 00:21:03.383 [2024-10-08 18:31:52.117023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.119863] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:03.383 [2024-10-08 18:31:52.119900] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:03.383 [2024-10-08 18:31:52.119913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.119921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:03.383 [2024-10-08 18:31:52.119934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.793 ms 00:21:03.383 [2024-10-08 18:31:52.119948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.135003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.135198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:03.383 [2024-10-08 18:31:52.135219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.013 ms 00:21:03.383 [2024-10-08 18:31:52.135236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.137963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.137999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:03.383 [2024-10-08 18:31:52.138009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.672 ms 00:21:03.383 [2024-10-08 18:31:52.138017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.140031] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.140062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:03.383 [2024-10-08 18:31:52.140071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.976 ms 00:21:03.383 [2024-10-08 18:31:52.140078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.140410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.140421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:03.383 [2024-10-08 18:31:52.140430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:21:03.383 [2024-10-08 18:31:52.140439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.158520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.158580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:03.383 [2024-10-08 18:31:52.158593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.064 ms 00:21:03.383 [2024-10-08 18:31:52.158600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.166185] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:03.383 [2024-10-08 18:31:52.168934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.169073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:03.383 [2024-10-08 18:31:52.169099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.285 ms 00:21:03.383 [2024-10-08 18:31:52.169107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.169185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.169196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:03.383 [2024-10-08 18:31:52.169204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:03.383 [2024-10-08 18:31:52.169214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.170560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.170599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:03.383 [2024-10-08 18:31:52.170608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.292 ms 00:21:03.383 [2024-10-08 18:31:52.170623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.170647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.170659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:03.383 [2024-10-08 18:31:52.170667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:03.383 [2024-10-08 18:31:52.170677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.170711] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:03.383 [2024-10-08 18:31:52.170721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.170728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:03.383 [2024-10-08 18:31:52.170736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:03.383 [2024-10-08 18:31:52.170745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.174643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.174676] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:03.383 [2024-10-08 18:31:52.174685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.864 ms 00:21:03.383 [2024-10-08 18:31:52.174693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.174777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:03.383 [2024-10-08 18:31:52.174791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:03.383 [2024-10-08 18:31:52.174799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:03.383 [2024-10-08 18:31:52.174806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:03.383 [2024-10-08 18:31:52.175679] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 108.455 ms, result 0 00:21:04.770  [2024-10-08T18:31:54.564Z] Copying: 16/1024 [MB] (16 MBps) [2024-10-08T18:31:55.566Z] Copying: 31/1024 [MB] (15 MBps) [2024-10-08T18:31:56.510Z] Copying: 47/1024 [MB] (15 MBps) [2024-10-08T18:31:57.450Z] Copying: 63/1024 [MB] (16 MBps) [2024-10-08T18:31:58.393Z] Copying: 87/1024 [MB] (24 MBps) [2024-10-08T18:31:59.779Z] Copying: 108/1024 [MB] (21 MBps) [2024-10-08T18:32:00.722Z] Copying: 142/1024 [MB] (33 MBps) [2024-10-08T18:32:01.666Z] Copying: 162/1024 [MB] (20 MBps) [2024-10-08T18:32:02.611Z] Copying: 183/1024 [MB] (20 MBps) [2024-10-08T18:32:03.556Z] Copying: 209/1024 [MB] (26 MBps) [2024-10-08T18:32:04.533Z] Copying: 233/1024 [MB] (23 MBps) [2024-10-08T18:32:05.476Z] Copying: 256/1024 [MB] (23 MBps) [2024-10-08T18:32:06.420Z] Copying: 283/1024 [MB] (27 MBps) [2024-10-08T18:32:07.363Z] Copying: 309/1024 [MB] (25 MBps) [2024-10-08T18:32:08.750Z] Copying: 334/1024 [MB] (25 MBps) [2024-10-08T18:32:09.692Z] Copying: 364/1024 [MB] (29 MBps) [2024-10-08T18:32:10.635Z] Copying: 385/1024 [MB] (21 MBps) [2024-10-08T18:32:11.581Z] 
Copying: 397/1024 [MB] (11 MBps) [2024-10-08T18:32:12.525Z] Copying: 409/1024 [MB] (11 MBps) [2024-10-08T18:32:13.468Z] Copying: 429/1024 [MB] (20 MBps) [2024-10-08T18:32:14.412Z] Copying: 455/1024 [MB] (25 MBps) [2024-10-08T18:32:15.796Z] Copying: 484/1024 [MB] (28 MBps) [2024-10-08T18:32:16.369Z] Copying: 505/1024 [MB] (21 MBps) [2024-10-08T18:32:17.757Z] Copying: 525/1024 [MB] (19 MBps) [2024-10-08T18:32:18.697Z] Copying: 551/1024 [MB] (25 MBps) [2024-10-08T18:32:19.637Z] Copying: 568/1024 [MB] (17 MBps) [2024-10-08T18:32:20.583Z] Copying: 585/1024 [MB] (16 MBps) [2024-10-08T18:32:21.525Z] Copying: 600/1024 [MB] (15 MBps) [2024-10-08T18:32:22.468Z] Copying: 623/1024 [MB] (22 MBps) [2024-10-08T18:32:23.408Z] Copying: 641/1024 [MB] (18 MBps) [2024-10-08T18:32:24.794Z] Copying: 658/1024 [MB] (16 MBps) [2024-10-08T18:32:25.365Z] Copying: 672/1024 [MB] (14 MBps) [2024-10-08T18:32:26.747Z] Copying: 685/1024 [MB] (13 MBps) [2024-10-08T18:32:27.690Z] Copying: 701/1024 [MB] (15 MBps) [2024-10-08T18:32:28.633Z] Copying: 712/1024 [MB] (11 MBps) [2024-10-08T18:32:29.577Z] Copying: 724/1024 [MB] (11 MBps) [2024-10-08T18:32:30.520Z] Copying: 734/1024 [MB] (10 MBps) [2024-10-08T18:32:31.464Z] Copying: 746/1024 [MB] (11 MBps) [2024-10-08T18:32:32.406Z] Copying: 756/1024 [MB] (10 MBps) [2024-10-08T18:32:33.792Z] Copying: 769/1024 [MB] (13 MBps) [2024-10-08T18:32:34.361Z] Copying: 781/1024 [MB] (11 MBps) [2024-10-08T18:32:35.745Z] Copying: 794/1024 [MB] (12 MBps) [2024-10-08T18:32:36.685Z] Copying: 811/1024 [MB] (16 MBps) [2024-10-08T18:32:37.626Z] Copying: 829/1024 [MB] (18 MBps) [2024-10-08T18:32:38.567Z] Copying: 847/1024 [MB] (18 MBps) [2024-10-08T18:32:39.510Z] Copying: 862/1024 [MB] (15 MBps) [2024-10-08T18:32:40.573Z] Copying: 882/1024 [MB] (19 MBps) [2024-10-08T18:32:41.532Z] Copying: 902/1024 [MB] (19 MBps) [2024-10-08T18:32:42.475Z] Copying: 919/1024 [MB] (16 MBps) [2024-10-08T18:32:43.465Z] Copying: 933/1024 [MB] (14 MBps) [2024-10-08T18:32:44.409Z] Copying: 951/1024 
[MB] (17 MBps) [2024-10-08T18:32:45.363Z] Copying: 971/1024 [MB] (19 MBps) [2024-10-08T18:32:46.747Z] Copying: 992/1024 [MB] (21 MBps) [2024-10-08T18:32:47.317Z] Copying: 1006/1024 [MB] (13 MBps) [2024-10-08T18:32:47.578Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-10-08 18:32:47.423250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-10-08 18:32:47.423316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:58.728 [2024-10-08 18:32:47.423329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:58.728 [2024-10-08 18:32:47.423338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-10-08 18:32:47.423365] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:58.728 [2024-10-08 18:32:47.423837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-10-08 18:32:47.423863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:58.728 [2024-10-08 18:32:47.423881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.450 ms 00:21:58.728 [2024-10-08 18:32:47.423889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-10-08 18:32:47.424115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-10-08 18:32:47.424126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:58.728 [2024-10-08 18:32:47.424136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:21:58.728 [2024-10-08 18:32:47.424145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-10-08 18:32:47.430181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-10-08 18:32:47.430330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:58.728 [2024-10-08 
18:32:47.430349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.021 ms 00:21:58.728 [2024-10-08 18:32:47.430358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-10-08 18:32:47.436764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-10-08 18:32:47.436868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:58.728 [2024-10-08 18:32:47.436936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.363 ms 00:21:58.728 [2024-10-08 18:32:47.436987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-10-08 18:32:47.438972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-10-08 18:32:47.439079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:58.728 [2024-10-08 18:32:47.439134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.905 ms 00:21:58.728 [2024-10-08 18:32:47.439158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.728 [2024-10-08 18:32:47.443080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.728 [2024-10-08 18:32:47.443188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:58.728 [2024-10-08 18:32:47.443243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.882 ms 00:21:58.728 [2024-10-08 18:32:47.443265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.991 [2024-10-08 18:32:47.684725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.991 [2024-10-08 18:32:47.684970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:58.991 [2024-10-08 18:32:47.685095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 241.339 ms 00:21:58.991 [2024-10-08 18:32:47.685142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:58.991 [2024-10-08 18:32:47.687823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.991 [2024-10-08 18:32:47.687984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:58.991 [2024-10-08 18:32:47.688074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.622 ms 00:21:58.991 [2024-10-08 18:32:47.688156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.991 [2024-10-08 18:32:47.689959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.991 [2024-10-08 18:32:47.690112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:58.991 [2024-10-08 18:32:47.690197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.737 ms 00:21:58.991 [2024-10-08 18:32:47.690238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.991 [2024-10-08 18:32:47.691830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.991 [2024-10-08 18:32:47.691972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:58.991 [2024-10-08 18:32:47.692076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.496 ms 00:21:58.991 [2024-10-08 18:32:47.692196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.991 [2024-10-08 18:32:47.693896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.991 [2024-10-08 18:32:47.694039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:58.991 [2024-10-08 18:32:47.694128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.590 ms 00:21:58.991 [2024-10-08 18:32:47.694171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.991 [2024-10-08 18:32:47.694264] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:58.991 [2024-10-08 18:32:47.694315] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:21:58.991 [2024-10-08 18:32:47.694520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.694617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.694700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.694821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.694926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695739] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.695975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.696115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.696209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.696271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.696371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.696427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.696523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.696615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.696633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.696647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 18:32:47.696660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:58.991 [2024-10-08 
18:32:47.696674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 
[2024-10-08 18:32:47.696880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.696997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:21:58.992 [2024-10-08 18:32:47.697089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: 
free 00:21:58.992 [2024-10-08 18:32:47.697273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 
state: free 00:21:58.992 [2024-10-08 18:32:47.697465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 
0 state: free 00:21:58.992 [2024-10-08 18:32:47.697649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:58.992 [2024-10-08 18:32:47.697686] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:58.992 [2024-10-08 18:32:47.697700] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cb48738f-b06a-4f62-98b8-1a25e482c773 00:21:58.992 [2024-10-08 18:32:47.697723] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:21:58.992 [2024-10-08 18:32:47.697737] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 35008 00:21:58.992 [2024-10-08 18:32:47.698026] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 34048 00:21:58.992 [2024-10-08 18:32:47.698099] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0282 00:21:58.992 [2024-10-08 18:32:47.698198] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:58.992 [2024-10-08 18:32:47.698239] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:58.992 [2024-10-08 18:32:47.698273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:58.992 [2024-10-08 18:32:47.698345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:58.992 [2024-10-08 18:32:47.698378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:58.992 [2024-10-08 18:32:47.698513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.992 [2024-10-08 18:32:47.698557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:58.992 [2024-10-08 18:32:47.698655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.249 ms 00:21:58.992 [2024-10-08 18:32:47.698696] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:58.992 [2024-10-08 18:32:47.700321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.992 [2024-10-08 18:32:47.700480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:58.992 [2024-10-08 18:32:47.700570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:21:58.992 [2024-10-08 18:32:47.700610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.992 [2024-10-08 18:32:47.700810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:58.992 [2024-10-08 18:32:47.700925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:58.992 [2024-10-08 18:32:47.701079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:21:58.992 [2024-10-08 18:32:47.701130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.992 [2024-10-08 18:32:47.706281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.992 [2024-10-08 18:32:47.706453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:58.992 [2024-10-08 18:32:47.706490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.992 [2024-10-08 18:32:47.706505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.992 [2024-10-08 18:32:47.706583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.992 [2024-10-08 18:32:47.706606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:58.993 [2024-10-08 18:32:47.706621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.706639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.706698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.993 [2024-10-08 18:32:47.706720] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:58.993 [2024-10-08 18:32:47.706734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.706774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.706805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.993 [2024-10-08 18:32:47.706819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:58.993 [2024-10-08 18:32:47.706834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.706848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.716729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.993 [2024-10-08 18:32:47.716832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:58.993 [2024-10-08 18:32:47.716851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.716864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.724917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.993 [2024-10-08 18:32:47.724994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:58.993 [2024-10-08 18:32:47.725013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.725048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.725119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.993 [2024-10-08 18:32:47.725139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:58.993 [2024-10-08 18:32:47.725153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.725165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.725230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.993 [2024-10-08 18:32:47.725245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:58.993 [2024-10-08 18:32:47.725259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.725271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.725361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.993 [2024-10-08 18:32:47.725377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:58.993 [2024-10-08 18:32:47.725394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.725406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.725447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.993 [2024-10-08 18:32:47.725463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:58.993 [2024-10-08 18:32:47.725477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.725490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.725539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.993 [2024-10-08 18:32:47.725556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:58.993 [2024-10-08 18:32:47.725571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.725590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.725651] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:58.993 [2024-10-08 18:32:47.725673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:58.993 [2024-10-08 18:32:47.725687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:58.993 [2024-10-08 18:32:47.725699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:58.993 [2024-10-08 18:32:47.725893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 302.595 ms, result 0 00:21:59.256 00:21:59.256 00:21:59.256 18:32:47 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:01.809 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:01.809 18:32:50 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:01.809 18:32:50 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:01.809 18:32:50 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:01.809 18:32:50 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:01.809 18:32:50 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:01.809 Process with pid 87236 is not found 00:22:01.809 Remove shared memory files 00:22:01.809 18:32:50 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 87236 00:22:01.809 18:32:50 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 87236 ']' 00:22:01.809 18:32:50 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 87236 00:22:01.809 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (87236) - No such process 00:22:01.809 18:32:50 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 87236 is not found' 00:22:01.809 18:32:50 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:01.809 18:32:50 
ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:01.809 18:32:50 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:01.810 18:32:50 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:01.810 18:32:50 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:01.810 18:32:50 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:01.810 18:32:50 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:01.810 ************************************ 00:22:01.810 END TEST ftl_restore 00:22:01.810 ************************************ 00:22:01.810 00:22:01.810 real 4m2.506s 00:22:01.810 user 3m51.258s 00:22:01.810 sys 0m11.902s 00:22:01.810 18:32:50 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:01.810 18:32:50 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:01.810 18:32:50 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:01.810 18:32:50 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:22:01.810 18:32:50 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:01.810 18:32:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:01.810 ************************************ 00:22:01.810 START TEST ftl_dirty_shutdown 00:22:01.810 ************************************ 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:01.810 * Looking for test storage... 
00:22:01.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.810 --rc genhtml_branch_coverage=1 00:22:01.810 --rc genhtml_function_coverage=1 00:22:01.810 --rc genhtml_legend=1 00:22:01.810 --rc geninfo_all_blocks=1 00:22:01.810 --rc geninfo_unexecuted_blocks=1 00:22:01.810 00:22:01.810 ' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.810 --rc genhtml_branch_coverage=1 00:22:01.810 --rc genhtml_function_coverage=1 
00:22:01.810 --rc genhtml_legend=1 00:22:01.810 --rc geninfo_all_blocks=1 00:22:01.810 --rc geninfo_unexecuted_blocks=1 00:22:01.810 00:22:01.810 ' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.810 --rc genhtml_branch_coverage=1 00:22:01.810 --rc genhtml_function_coverage=1 00:22:01.810 --rc genhtml_legend=1 00:22:01.810 --rc geninfo_all_blocks=1 00:22:01.810 --rc geninfo_unexecuted_blocks=1 00:22:01.810 00:22:01.810 ' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:01.810 --rc genhtml_branch_coverage=1 00:22:01.810 --rc genhtml_function_coverage=1 00:22:01.810 --rc genhtml_legend=1 00:22:01.810 --rc geninfo_all_blocks=1 00:22:01.810 --rc geninfo_unexecuted_blocks=1 00:22:01.810 00:22:01.810 ' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # 
spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:01.810 18:32:50 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=89813 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 89813 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89813 ']' 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.810 18:32:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:01.810 [2024-10-08 18:32:50.566428] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:22:01.810 [2024-10-08 18:32:50.566734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89813 ] 00:22:02.136 [2024-10-08 18:32:50.698287] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:02.136 [2024-10-08 18:32:50.718191] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.136 [2024-10-08 18:32:50.752276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.706 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.706 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:02.706 18:32:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:02.706 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:02.706 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:02.706 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:02.706 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:02.706 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:02.967 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:02.967 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:02.967 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:02.967 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:22:02.967 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:02.967 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:02.967 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:02.967 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:03.227 { 00:22:03.227 "name": 
"nvme0n1", 00:22:03.227 "aliases": [ 00:22:03.227 "b3560f0e-a3d7-437b-9dd2-0a80cd2c69e5" 00:22:03.227 ], 00:22:03.227 "product_name": "NVMe disk", 00:22:03.227 "block_size": 4096, 00:22:03.227 "num_blocks": 1310720, 00:22:03.227 "uuid": "b3560f0e-a3d7-437b-9dd2-0a80cd2c69e5", 00:22:03.227 "numa_id": -1, 00:22:03.227 "assigned_rate_limits": { 00:22:03.227 "rw_ios_per_sec": 0, 00:22:03.227 "rw_mbytes_per_sec": 0, 00:22:03.227 "r_mbytes_per_sec": 0, 00:22:03.227 "w_mbytes_per_sec": 0 00:22:03.227 }, 00:22:03.227 "claimed": true, 00:22:03.227 "claim_type": "read_many_write_one", 00:22:03.227 "zoned": false, 00:22:03.227 "supported_io_types": { 00:22:03.227 "read": true, 00:22:03.227 "write": true, 00:22:03.227 "unmap": true, 00:22:03.227 "flush": true, 00:22:03.227 "reset": true, 00:22:03.227 "nvme_admin": true, 00:22:03.227 "nvme_io": true, 00:22:03.227 "nvme_io_md": false, 00:22:03.227 "write_zeroes": true, 00:22:03.227 "zcopy": false, 00:22:03.227 "get_zone_info": false, 00:22:03.227 "zone_management": false, 00:22:03.227 "zone_append": false, 00:22:03.227 "compare": true, 00:22:03.227 "compare_and_write": false, 00:22:03.227 "abort": true, 00:22:03.227 "seek_hole": false, 00:22:03.227 "seek_data": false, 00:22:03.227 "copy": true, 00:22:03.227 "nvme_iov_md": false 00:22:03.227 }, 00:22:03.227 "driver_specific": { 00:22:03.227 "nvme": [ 00:22:03.227 { 00:22:03.227 "pci_address": "0000:00:11.0", 00:22:03.227 "trid": { 00:22:03.227 "trtype": "PCIe", 00:22:03.227 "traddr": "0000:00:11.0" 00:22:03.227 }, 00:22:03.227 "ctrlr_data": { 00:22:03.227 "cntlid": 0, 00:22:03.227 "vendor_id": "0x1b36", 00:22:03.227 "model_number": "QEMU NVMe Ctrl", 00:22:03.227 "serial_number": "12341", 00:22:03.227 "firmware_revision": "8.0.0", 00:22:03.227 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:03.227 "oacs": { 00:22:03.227 "security": 0, 00:22:03.227 "format": 1, 00:22:03.227 "firmware": 0, 00:22:03.227 "ns_manage": 1 00:22:03.227 }, 00:22:03.227 "multi_ctrlr": false, 00:22:03.227 
"ana_reporting": false 00:22:03.227 }, 00:22:03.227 "vs": { 00:22:03.227 "nvme_version": "1.4" 00:22:03.227 }, 00:22:03.227 "ns_data": { 00:22:03.227 "id": 1, 00:22:03.227 "can_share": false 00:22:03.227 } 00:22:03.227 } 00:22:03.227 ], 00:22:03.227 "mp_policy": "active_passive" 00:22:03.227 } 00:22:03.227 } 00:22:03.227 ]' 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:03.227 18:32:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:03.487 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=faf4d8ce-be28-4090-9bec-9e09ba74de44 00:22:03.487 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:03.487 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u faf4d8ce-be28-4090-9bec-9e09ba74de44 00:22:03.747 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:04.008 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 
-- # lvs=08a96236-a329-4295-b9ef-287915ce96d5 00:22:04.008 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 08a96236-a329-4295-b9ef-287915ce96d5 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=031f1e18-8ae7-46d9-bc67-67230365c918 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 031f1e18-8ae7-46d9-bc67-67230365c918 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=031f1e18-8ae7-46d9-bc67-67230365c918 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 031f1e18-8ae7-46d9-bc67-67230365c918 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=031f1e18-8ae7-46d9-bc67-67230365c918 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:04.268 18:32:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 031f1e18-8ae7-46d9-bc67-67230365c918 00:22:04.581 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:04.581 { 00:22:04.581 "name": "031f1e18-8ae7-46d9-bc67-67230365c918", 00:22:04.581 "aliases": [ 00:22:04.581 "lvs/nvme0n1p0" 00:22:04.581 ], 00:22:04.581 
"product_name": "Logical Volume", 00:22:04.581 "block_size": 4096, 00:22:04.581 "num_blocks": 26476544, 00:22:04.581 "uuid": "031f1e18-8ae7-46d9-bc67-67230365c918", 00:22:04.581 "assigned_rate_limits": { 00:22:04.581 "rw_ios_per_sec": 0, 00:22:04.581 "rw_mbytes_per_sec": 0, 00:22:04.581 "r_mbytes_per_sec": 0, 00:22:04.581 "w_mbytes_per_sec": 0 00:22:04.581 }, 00:22:04.581 "claimed": false, 00:22:04.581 "zoned": false, 00:22:04.581 "supported_io_types": { 00:22:04.581 "read": true, 00:22:04.581 "write": true, 00:22:04.581 "unmap": true, 00:22:04.581 "flush": false, 00:22:04.581 "reset": true, 00:22:04.581 "nvme_admin": false, 00:22:04.581 "nvme_io": false, 00:22:04.581 "nvme_io_md": false, 00:22:04.581 "write_zeroes": true, 00:22:04.581 "zcopy": false, 00:22:04.581 "get_zone_info": false, 00:22:04.581 "zone_management": false, 00:22:04.581 "zone_append": false, 00:22:04.581 "compare": false, 00:22:04.581 "compare_and_write": false, 00:22:04.581 "abort": false, 00:22:04.581 "seek_hole": true, 00:22:04.581 "seek_data": true, 00:22:04.581 "copy": false, 00:22:04.581 "nvme_iov_md": false 00:22:04.581 }, 00:22:04.581 "driver_specific": { 00:22:04.581 "lvol": { 00:22:04.581 "lvol_store_uuid": "08a96236-a329-4295-b9ef-287915ce96d5", 00:22:04.581 "base_bdev": "nvme0n1", 00:22:04.581 "thin_provision": true, 00:22:04.581 "num_allocated_clusters": 0, 00:22:04.581 "snapshot": false, 00:22:04.581 "clone": false, 00:22:04.581 "esnap_clone": false 00:22:04.581 } 00:22:04.581 } 00:22:04.581 } 00:22:04.581 ]' 00:22:04.581 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:04.581 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:04.581 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:04.581 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:04.581 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # 
bdev_size=103424 00:22:04.581 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:04.581 18:32:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:04.581 18:32:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:04.581 18:32:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:04.842 18:32:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:04.842 18:32:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:04.842 18:32:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 031f1e18-8ae7-46d9-bc67-67230365c918 00:22:04.842 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=031f1e18-8ae7-46d9-bc67-67230365c918 00:22:04.842 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:04.842 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:04.842 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:04.842 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 031f1e18-8ae7-46d9-bc67-67230365c918 00:22:04.842 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:04.842 { 00:22:04.842 "name": "031f1e18-8ae7-46d9-bc67-67230365c918", 00:22:04.842 "aliases": [ 00:22:04.842 "lvs/nvme0n1p0" 00:22:04.842 ], 00:22:04.842 "product_name": "Logical Volume", 00:22:04.842 "block_size": 4096, 00:22:04.842 "num_blocks": 26476544, 00:22:04.842 "uuid": "031f1e18-8ae7-46d9-bc67-67230365c918", 00:22:04.842 "assigned_rate_limits": { 00:22:04.842 "rw_ios_per_sec": 0, 00:22:04.842 "rw_mbytes_per_sec": 0, 00:22:04.842 "r_mbytes_per_sec": 0, 00:22:04.842 "w_mbytes_per_sec": 0 00:22:04.842 }, 00:22:04.842 
"claimed": false, 00:22:04.842 "zoned": false, 00:22:04.842 "supported_io_types": { 00:22:04.842 "read": true, 00:22:04.842 "write": true, 00:22:04.842 "unmap": true, 00:22:04.842 "flush": false, 00:22:04.842 "reset": true, 00:22:04.842 "nvme_admin": false, 00:22:04.842 "nvme_io": false, 00:22:04.842 "nvme_io_md": false, 00:22:04.842 "write_zeroes": true, 00:22:04.842 "zcopy": false, 00:22:04.842 "get_zone_info": false, 00:22:04.842 "zone_management": false, 00:22:04.842 "zone_append": false, 00:22:04.842 "compare": false, 00:22:04.842 "compare_and_write": false, 00:22:04.842 "abort": false, 00:22:04.842 "seek_hole": true, 00:22:04.842 "seek_data": true, 00:22:04.842 "copy": false, 00:22:04.842 "nvme_iov_md": false 00:22:04.842 }, 00:22:04.842 "driver_specific": { 00:22:04.842 "lvol": { 00:22:04.842 "lvol_store_uuid": "08a96236-a329-4295-b9ef-287915ce96d5", 00:22:04.842 "base_bdev": "nvme0n1", 00:22:04.842 "thin_provision": true, 00:22:04.842 "num_allocated_clusters": 0, 00:22:04.842 "snapshot": false, 00:22:04.842 "clone": false, 00:22:04.842 "esnap_clone": false 00:22:04.842 } 00:22:04.842 } 00:22:04.842 } 00:22:04.842 ]' 00:22:04.842 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:05.103 18:32:53 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 031f1e18-8ae7-46d9-bc67-67230365c918 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=031f1e18-8ae7-46d9-bc67-67230365c918 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:05.103 18:32:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 031f1e18-8ae7-46d9-bc67-67230365c918 00:22:05.363 18:32:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:05.363 { 00:22:05.363 "name": "031f1e18-8ae7-46d9-bc67-67230365c918", 00:22:05.363 "aliases": [ 00:22:05.363 "lvs/nvme0n1p0" 00:22:05.363 ], 00:22:05.363 "product_name": "Logical Volume", 00:22:05.363 "block_size": 4096, 00:22:05.363 "num_blocks": 26476544, 00:22:05.363 "uuid": "031f1e18-8ae7-46d9-bc67-67230365c918", 00:22:05.363 "assigned_rate_limits": { 00:22:05.363 "rw_ios_per_sec": 0, 00:22:05.363 "rw_mbytes_per_sec": 0, 00:22:05.363 "r_mbytes_per_sec": 0, 00:22:05.363 "w_mbytes_per_sec": 0 00:22:05.363 }, 00:22:05.363 "claimed": false, 00:22:05.363 "zoned": false, 00:22:05.363 "supported_io_types": { 00:22:05.363 "read": true, 00:22:05.363 "write": true, 00:22:05.363 "unmap": true, 00:22:05.363 "flush": false, 00:22:05.363 "reset": true, 00:22:05.363 "nvme_admin": false, 00:22:05.363 "nvme_io": false, 00:22:05.363 "nvme_io_md": false, 00:22:05.363 "write_zeroes": true, 00:22:05.363 "zcopy": false, 00:22:05.363 "get_zone_info": false, 00:22:05.363 "zone_management": false, 00:22:05.363 "zone_append": false, 00:22:05.363 "compare": false, 00:22:05.363 
"compare_and_write": false, 00:22:05.363 "abort": false, 00:22:05.363 "seek_hole": true, 00:22:05.363 "seek_data": true, 00:22:05.363 "copy": false, 00:22:05.363 "nvme_iov_md": false 00:22:05.363 }, 00:22:05.363 "driver_specific": { 00:22:05.363 "lvol": { 00:22:05.363 "lvol_store_uuid": "08a96236-a329-4295-b9ef-287915ce96d5", 00:22:05.363 "base_bdev": "nvme0n1", 00:22:05.363 "thin_provision": true, 00:22:05.363 "num_allocated_clusters": 0, 00:22:05.363 "snapshot": false, 00:22:05.363 "clone": false, 00:22:05.363 "esnap_clone": false 00:22:05.363 } 00:22:05.363 } 00:22:05.363 } 00:22:05.363 ]' 00:22:05.363 18:32:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:05.363 18:32:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:05.363 18:32:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:05.625 18:32:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:22:05.625 18:32:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:22:05.625 18:32:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:22:05.625 18:32:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:05.625 18:32:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 031f1e18-8ae7-46d9-bc67-67230365c918 --l2p_dram_limit 10' 00:22:05.625 18:32:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:05.625 18:32:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:05.625 18:32:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:05.625 18:32:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 031f1e18-8ae7-46d9-bc67-67230365c918 --l2p_dram_limit 10 -c 
nvc0n1p0 00:22:05.625 [2024-10-08 18:32:54.438302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.438361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:05.625 [2024-10-08 18:32:54.438378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:05.625 [2024-10-08 18:32:54.438386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.438450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.438461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:05.625 [2024-10-08 18:32:54.438477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:22:05.625 [2024-10-08 18:32:54.438487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.438514] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:05.625 [2024-10-08 18:32:54.440225] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:05.625 [2024-10-08 18:32:54.440334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.440359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:05.625 [2024-10-08 18:32:54.440383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.831 ms 00:22:05.625 [2024-10-08 18:32:54.440402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.440548] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 586781a0-ef3b-4b7a-b872-155f1a622c6f 00:22:05.625 [2024-10-08 18:32:54.441776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.441887] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:05.625 [2024-10-08 18:32:54.441945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:05.625 [2024-10-08 18:32:54.441995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.447415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.447527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:05.625 [2024-10-08 18:32:54.447581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.332 ms 00:22:05.625 [2024-10-08 18:32:54.447628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.447733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.447813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.625 [2024-10-08 18:32:54.447840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:05.625 [2024-10-08 18:32:54.447885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.447966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.447997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:05.625 [2024-10-08 18:32:54.448045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:05.625 [2024-10-08 18:32:54.448074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.448109] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:05.625 [2024-10-08 18:32:54.449666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.449778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
core IO channel 00:22:05.625 [2024-10-08 18:32:54.449838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.561 ms 00:22:05.625 [2024-10-08 18:32:54.449861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.449974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.450000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:05.625 [2024-10-08 18:32:54.450024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:05.625 [2024-10-08 18:32:54.450079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.450114] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:05.625 [2024-10-08 18:32:54.450261] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:05.625 [2024-10-08 18:32:54.450299] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:05.625 [2024-10-08 18:32:54.450363] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:05.625 [2024-10-08 18:32:54.450397] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:05.625 [2024-10-08 18:32:54.450454] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:05.625 [2024-10-08 18:32:54.450521] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:05.625 [2024-10-08 18:32:54.450545] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:05.625 [2024-10-08 18:32:54.450590] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:05.625 [2024-10-08 18:32:54.450615] ftl_layout.c: 692:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:05.625 [2024-10-08 18:32:54.450702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.450734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:05.625 [2024-10-08 18:32:54.450768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:22:05.625 [2024-10-08 18:32:54.450811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.450919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.625 [2024-10-08 18:32:54.450946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:05.625 [2024-10-08 18:32:54.450990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:05.625 [2024-10-08 18:32:54.451013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.625 [2024-10-08 18:32:54.451125] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:05.625 [2024-10-08 18:32:54.451149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:05.625 [2024-10-08 18:32:54.451171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:05.625 [2024-10-08 18:32:54.451219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.625 [2024-10-08 18:32:54.451244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:05.625 [2024-10-08 18:32:54.451262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:05.625 [2024-10-08 18:32:54.451314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:05.625 [2024-10-08 18:32:54.451354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:05.625 [2024-10-08 18:32:54.451380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:05.625 [2024-10-08 18:32:54.451400] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:05.626 [2024-10-08 18:32:54.451447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:05.626 [2024-10-08 18:32:54.451455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:05.626 [2024-10-08 18:32:54.451466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:05.626 [2024-10-08 18:32:54.451473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:05.626 [2024-10-08 18:32:54.451482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:05.626 [2024-10-08 18:32:54.451489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.626 [2024-10-08 18:32:54.451497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:05.626 [2024-10-08 18:32:54.451504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:05.626 [2024-10-08 18:32:54.451512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.626 [2024-10-08 18:32:54.451520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:05.626 [2024-10-08 18:32:54.451529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:05.626 [2024-10-08 18:32:54.451535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.626 [2024-10-08 18:32:54.451543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:05.626 [2024-10-08 18:32:54.451550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:05.626 [2024-10-08 18:32:54.451559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.626 [2024-10-08 18:32:54.451565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:05.626 [2024-10-08 18:32:54.451573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:05.626 [2024-10-08 
18:32:54.451580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.626 [2024-10-08 18:32:54.451591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:05.626 [2024-10-08 18:32:54.451598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:05.626 [2024-10-08 18:32:54.451606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:05.626 [2024-10-08 18:32:54.451613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:05.626 [2024-10-08 18:32:54.451620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:05.626 [2024-10-08 18:32:54.451627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:05.626 [2024-10-08 18:32:54.451637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:05.626 [2024-10-08 18:32:54.451645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:05.626 [2024-10-08 18:32:54.451654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:05.626 [2024-10-08 18:32:54.451661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:05.626 [2024-10-08 18:32:54.451669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:05.626 [2024-10-08 18:32:54.451676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.626 [2024-10-08 18:32:54.451684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:05.626 [2024-10-08 18:32:54.451690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:05.626 [2024-10-08 18:32:54.451698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.626 [2024-10-08 18:32:54.451704] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:05.626 [2024-10-08 18:32:54.451718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:22:05.626 [2024-10-08 18:32:54.451726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:05.626 [2024-10-08 18:32:54.451735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:05.626 [2024-10-08 18:32:54.451743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:05.626 [2024-10-08 18:32:54.451849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:05.626 [2024-10-08 18:32:54.451876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:05.626 [2024-10-08 18:32:54.451897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:05.626 [2024-10-08 18:32:54.451915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:05.626 [2024-10-08 18:32:54.451939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:05.626 [2024-10-08 18:32:54.451966] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:05.626 [2024-10-08 18:32:54.452049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:05.626 [2024-10-08 18:32:54.452084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:05.626 [2024-10-08 18:32:54.452117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:05.626 [2024-10-08 18:32:54.452145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:05.626 [2024-10-08 18:32:54.452258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:05.626 [2024-10-08 18:32:54.452267] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:05.626 [2024-10-08 18:32:54.452277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:05.626 [2024-10-08 18:32:54.452284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:05.626 [2024-10-08 18:32:54.452293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:05.626 [2024-10-08 18:32:54.452300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:05.626 [2024-10-08 18:32:54.452309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:05.626 [2024-10-08 18:32:54.452316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:05.626 [2024-10-08 18:32:54.452325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:05.626 [2024-10-08 18:32:54.452333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:05.626 [2024-10-08 18:32:54.452342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:05.626 [2024-10-08 18:32:54.452349] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:05.626 [2024-10-08 18:32:54.452361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 
blk_sz:0x20 00:22:05.626 [2024-10-08 18:32:54.452372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:05.626 [2024-10-08 18:32:54.452381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:05.626 [2024-10-08 18:32:54.452388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:05.626 [2024-10-08 18:32:54.452397] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:05.626 [2024-10-08 18:32:54.452405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.626 [2024-10-08 18:32:54.452416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:05.626 [2024-10-08 18:32:54.452424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:22:05.626 [2024-10-08 18:32:54.452432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.626 [2024-10-08 18:32:54.452494] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:22:05.626 [2024-10-08 18:32:54.452507] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:08.951 [2024-10-08 18:32:57.258994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.259049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:08.951 [2024-10-08 18:32:57.259066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2806.489 ms 00:22:08.951 [2024-10-08 18:32:57.259076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.267625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.267668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.951 [2024-10-08 18:32:57.267682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.473 ms 00:22:08.951 [2024-10-08 18:32:57.267694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.267821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.267834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:08.951 [2024-10-08 18:32:57.267846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:08.951 [2024-10-08 18:32:57.267856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.275774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.275812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.951 [2024-10-08 18:32:57.275822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.879 ms 00:22:08.951 [2024-10-08 18:32:57.275838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.275866] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.275878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.951 [2024-10-08 18:32:57.275886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:08.951 [2024-10-08 18:32:57.275895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.276223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.276241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.951 [2024-10-08 18:32:57.276249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:22:08.951 [2024-10-08 18:32:57.276260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.276364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.276380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:08.951 [2024-10-08 18:32:57.276390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:22:08.951 [2024-10-08 18:32:57.276400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.289550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.289604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:08.951 [2024-10-08 18:32:57.289621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.128 ms 00:22:08.951 [2024-10-08 18:32:57.289635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.299999] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:08.951 [2024-10-08 18:32:57.302700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 
[2024-10-08 18:32:57.302729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:08.951 [2024-10-08 18:32:57.302743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.893 ms 00:22:08.951 [2024-10-08 18:32:57.302766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.368342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.368390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:08.951 [2024-10-08 18:32:57.368412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.544 ms 00:22:08.951 [2024-10-08 18:32:57.368422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.368602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.368613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:08.951 [2024-10-08 18:32:57.368623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:22:08.951 [2024-10-08 18:32:57.368630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.372734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.372780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:08.951 [2024-10-08 18:32:57.372794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.082 ms 00:22:08.951 [2024-10-08 18:32:57.372803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.376406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.376437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:08.951 [2024-10-08 18:32:57.376450] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.562 ms 00:22:08.951 [2024-10-08 18:32:57.376458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.376777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.376788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:08.951 [2024-10-08 18:32:57.376800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:22:08.951 [2024-10-08 18:32:57.376807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.407097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.407139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:08.951 [2024-10-08 18:32:57.407152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.266 ms 00:22:08.951 [2024-10-08 18:32:57.407160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.411940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.411974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:08.951 [2024-10-08 18:32:57.411991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.740 ms 00:22:08.951 [2024-10-08 18:32:57.411999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.416000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.416032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:08.951 [2024-10-08 18:32:57.416045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.976 ms 00:22:08.951 [2024-10-08 18:32:57.416053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 
18:32:57.420417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.420450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:08.951 [2024-10-08 18:32:57.420465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.340 ms 00:22:08.951 [2024-10-08 18:32:57.420473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.420499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.420512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:08.951 [2024-10-08 18:32:57.420526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:08.951 [2024-10-08 18:32:57.420534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.420603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.951 [2024-10-08 18:32:57.420613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:08.951 [2024-10-08 18:32:57.420623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:08.951 [2024-10-08 18:32:57.420632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.951 [2024-10-08 18:32:57.421463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2982.786 ms, result 0 00:22:08.951 { 00:22:08.951 "name": "ftl0", 00:22:08.951 "uuid": "586781a0-ef3b-4b7a-b872-155f1a622c6f" 00:22:08.951 } 00:22:08.951 18:32:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:08.951 18:32:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:08.951 18:32:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:08.951 18:32:57 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:08.952 18:32:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:09.210 /dev/nbd0 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:09.210 1+0 records in 00:22:09.210 1+0 records out 00:22:09.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407141 s, 10.1 MB/s 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:22:09.210 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:09.211 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:22:09.211 18:32:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 
00:22:09.211 18:32:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:09.211 [2024-10-08 18:32:57.945041] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:22:09.211 [2024-10-08 18:32:57.945162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89951 ] 00:22:09.469 [2024-10-08 18:32:58.073336] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:09.469 [2024-10-08 18:32:58.091875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.469 [2024-10-08 18:32:58.126554] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.419  [2024-10-08T18:33:00.221Z] Copying: 191/1024 [MB] (191 MBps) [2024-10-08T18:33:01.607Z] Copying: 386/1024 [MB] (195 MBps) [2024-10-08T18:33:02.553Z] Copying: 582/1024 [MB] (195 MBps) [2024-10-08T18:33:03.526Z] Copying: 777/1024 [MB] (195 MBps) [2024-10-08T18:33:03.526Z] Copying: 967/1024 [MB] (190 MBps) [2024-10-08T18:33:03.788Z] Copying: 1024/1024 [MB] (average 193 MBps) 00:22:14.938 00:22:14.938 18:33:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:17.482 18:33:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:17.482 [2024-10-08 18:33:05.893474] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:22:17.482 [2024-10-08 18:33:05.893827] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90032 ] 00:22:17.482 [2024-10-08 18:33:06.022491] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:17.482 [2024-10-08 18:33:06.041486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.482 [2024-10-08 18:33:06.077673] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.424  [2024-10-08T18:33:08.217Z] Copying: 7608/1048576 [kB] (7608 kBps) [2024-10-08T18:33:09.163Z] Copying: 15980/1048576 [kB] (8372 kBps) [2024-10-08T18:33:10.555Z] Copying: 18868/1048576 [kB] (2888 kBps) [2024-10-08T18:33:11.501Z] Copying: 29/1024 [MB] (10 MBps) [2024-10-08T18:33:12.447Z] Copying: 40/1024 [MB] (11 MBps) [2024-10-08T18:33:13.387Z] Copying: 53/1024 [MB] (12 MBps) [2024-10-08T18:33:14.331Z] Copying: 65/1024 [MB] (12 MBps) [2024-10-08T18:33:15.272Z] Copying: 75/1024 [MB] (10 MBps) [2024-10-08T18:33:16.264Z] Copying: 88000/1048576 [kB] (10184 kBps) [2024-10-08T18:33:17.209Z] Copying: 96/1024 [MB] (10 MBps) [2024-10-08T18:33:18.153Z] Copying: 107/1024 [MB] (10 MBps) [2024-10-08T18:33:19.536Z] Copying: 118/1024 [MB] (11 MBps) [2024-10-08T18:33:20.480Z] Copying: 131184/1048576 [kB] (9952 kBps) [2024-10-08T18:33:21.423Z] Copying: 138/1024 [MB] (10 MBps) [2024-10-08T18:33:22.364Z] Copying: 148/1024 [MB] (10 MBps) [2024-10-08T18:33:23.306Z] Copying: 161516/1048576 [kB] (9320 kBps) [2024-10-08T18:33:24.249Z] Copying: 171/1024 [MB] (13 MBps) [2024-10-08T18:33:25.190Z] Copying: 184/1024 [MB] (12 MBps) [2024-10-08T18:33:26.155Z] Copying: 196/1024 [MB] (12 MBps) [2024-10-08T18:33:27.546Z] Copying: 209/1024 [MB] (13 MBps) [2024-10-08T18:33:28.494Z] Copying: 
220/1024 [MB] (10 MBps) [2024-10-08T18:33:29.438Z] Copying: 232/1024 [MB] (11 MBps) [2024-10-08T18:33:30.381Z] Copying: 247200/1048576 [kB] (9544 kBps) [2024-10-08T18:33:31.323Z] Copying: 256680/1048576 [kB] (9480 kBps) [2024-10-08T18:33:32.268Z] Copying: 266588/1048576 [kB] (9908 kBps) [2024-10-08T18:33:33.212Z] Copying: 276232/1048576 [kB] (9644 kBps) [2024-10-08T18:33:34.161Z] Copying: 285992/1048576 [kB] (9760 kBps) [2024-10-08T18:33:35.554Z] Copying: 295396/1048576 [kB] (9404 kBps) [2024-10-08T18:33:36.499Z] Copying: 299/1024 [MB] (10 MBps) [2024-10-08T18:33:37.491Z] Copying: 312/1024 [MB] (13 MBps) [2024-10-08T18:33:38.436Z] Copying: 322/1024 [MB] (10 MBps) [2024-10-08T18:33:39.378Z] Copying: 333/1024 [MB] (10 MBps) [2024-10-08T18:33:40.325Z] Copying: 344/1024 [MB] (10 MBps) [2024-10-08T18:33:41.266Z] Copying: 355/1024 [MB] (10 MBps) [2024-10-08T18:33:42.235Z] Copying: 365/1024 [MB] (10 MBps) [2024-10-08T18:33:43.176Z] Copying: 384616/1048576 [kB] (9848 kBps) [2024-10-08T18:33:44.558Z] Copying: 386/1024 [MB] (10 MBps) [2024-10-08T18:33:45.501Z] Copying: 405480/1048576 [kB] (10076 kBps) [2024-10-08T18:33:46.441Z] Copying: 407/1024 [MB] (11 MBps) [2024-10-08T18:33:47.382Z] Copying: 421/1024 [MB] (13 MBps) [2024-10-08T18:33:48.321Z] Copying: 441932/1048576 [kB] (10232 kBps) [2024-10-08T18:33:49.262Z] Copying: 442/1024 [MB] (10 MBps) [2024-10-08T18:33:50.203Z] Copying: 453/1024 [MB] (11 MBps) [2024-10-08T18:33:51.143Z] Copying: 464/1024 [MB] (10 MBps) [2024-10-08T18:33:52.528Z] Copying: 485036/1048576 [kB] (9724 kBps) [2024-10-08T18:33:53.468Z] Copying: 494876/1048576 [kB] (9840 kBps) [2024-10-08T18:33:54.409Z] Copying: 494/1024 [MB] (11 MBps) [2024-10-08T18:33:55.359Z] Copying: 506/1024 [MB] (11 MBps) [2024-10-08T18:33:56.300Z] Copying: 519/1024 [MB] (13 MBps) [2024-10-08T18:33:57.242Z] Copying: 530/1024 [MB] (10 MBps) [2024-10-08T18:33:58.184Z] Copying: 541/1024 [MB] (11 MBps) [2024-10-08T18:33:59.568Z] Copying: 564764/1048576 [kB] (9940 kBps) 
[2024-10-08T18:34:00.139Z] Copying: 564/1024 [MB] (12 MBps) [2024-10-08T18:34:01.525Z] Copying: 577/1024 [MB] (13 MBps) [2024-10-08T18:34:02.464Z] Copying: 587/1024 [MB] (10 MBps) [2024-10-08T18:34:03.451Z] Copying: 599/1024 [MB] (11 MBps) [2024-10-08T18:34:04.388Z] Copying: 610/1024 [MB] (11 MBps) [2024-10-08T18:34:05.327Z] Copying: 620/1024 [MB] (10 MBps) [2024-10-08T18:34:06.270Z] Copying: 630/1024 [MB] (10 MBps) [2024-10-08T18:34:07.212Z] Copying: 640/1024 [MB] (10 MBps) [2024-10-08T18:34:08.181Z] Copying: 666176/1048576 [kB] (10108 kBps) [2024-10-08T18:34:09.134Z] Copying: 660/1024 [MB] (10 MBps) [2024-10-08T18:34:10.521Z] Copying: 671/1024 [MB] (10 MBps) [2024-10-08T18:34:11.461Z] Copying: 682/1024 [MB] (11 MBps) [2024-10-08T18:34:12.439Z] Copying: 709340/1048576 [kB] (10232 kBps) [2024-10-08T18:34:13.378Z] Copying: 719252/1048576 [kB] (9912 kBps) [2024-10-08T18:34:14.318Z] Copying: 713/1024 [MB] (11 MBps) [2024-10-08T18:34:15.259Z] Copying: 729/1024 [MB] (16 MBps) [2024-10-08T18:34:16.200Z] Copying: 740/1024 [MB] (10 MBps) [2024-10-08T18:34:17.142Z] Copying: 768436/1048576 [kB] (9972 kBps) [2024-10-08T18:34:18.535Z] Copying: 777996/1048576 [kB] (9560 kBps) [2024-10-08T18:34:19.476Z] Copying: 787320/1048576 [kB] (9324 kBps) [2024-10-08T18:34:20.419Z] Copying: 779/1024 [MB] (10 MBps) [2024-10-08T18:34:21.358Z] Copying: 808040/1048576 [kB] (10144 kBps) [2024-10-08T18:34:22.297Z] Copying: 818028/1048576 [kB] (9988 kBps) [2024-10-08T18:34:23.238Z] Copying: 826952/1048576 [kB] (8924 kBps) [2024-10-08T18:34:24.187Z] Copying: 836040/1048576 [kB] (9088 kBps) [2024-10-08T18:34:25.573Z] Copying: 845204/1048576 [kB] (9164 kBps) [2024-10-08T18:34:26.147Z] Copying: 835/1024 [MB] (10 MBps) [2024-10-08T18:34:27.531Z] Copying: 865224/1048576 [kB] (9568 kBps) [2024-10-08T18:34:28.474Z] Copying: 855/1024 [MB] (10 MBps) [2024-10-08T18:34:29.421Z] Copying: 885232/1048576 [kB] (9536 kBps) [2024-10-08T18:34:30.367Z] Copying: 874/1024 [MB] (10 MBps) [2024-10-08T18:34:31.306Z] 
Copying: 905692/1048576 [kB] (9808 kBps) [2024-10-08T18:34:32.246Z] Copying: 895/1024 [MB] (10 MBps) [2024-10-08T18:34:33.229Z] Copying: 926632/1048576 [kB] (10016 kBps) [2024-10-08T18:34:34.268Z] Copying: 914/1024 [MB] (10 MBps) [2024-10-08T18:34:35.212Z] Copying: 946248/1048576 [kB] (9328 kBps) [2024-10-08T18:34:36.154Z] Copying: 955540/1048576 [kB] (9292 kBps) [2024-10-08T18:34:37.540Z] Copying: 965284/1048576 [kB] (9744 kBps) [2024-10-08T18:34:38.149Z] Copying: 975352/1048576 [kB] (10068 kBps) [2024-10-08T18:34:39.530Z] Copying: 962/1024 [MB] (10 MBps) [2024-10-08T18:34:40.472Z] Copying: 996240/1048576 [kB] (10180 kBps) [2024-10-08T18:34:41.414Z] Copying: 1005904/1048576 [kB] (9664 kBps) [2024-10-08T18:34:42.355Z] Copying: 1016036/1048576 [kB] (10132 kBps) [2024-10-08T18:34:43.343Z] Copying: 1026088/1048576 [kB] (10052 kBps) [2024-10-08T18:34:44.284Z] Copying: 1035008/1048576 [kB] (8920 kBps) [2024-10-08T18:34:44.545Z] Copying: 1044920/1048576 [kB] (9912 kBps) [2024-10-08T18:34:44.810Z] Copying: 1024/1024 [MB] (average 10 MBps) 00:23:55.960 00:23:55.960 18:34:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:55.960 18:34:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:56.221 18:34:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:56.484 [2024-10-08 18:34:45.146199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.146265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:56.484 [2024-10-08 18:34:45.146280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:56.484 [2024-10-08 18:34:45.146305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.146330] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: 
[FTL][ftl0] FTL IO channel destroy on app_thread 00:23:56.484 [2024-10-08 18:34:45.146821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.146843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:56.484 [2024-10-08 18:34:45.146857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:23:56.484 [2024-10-08 18:34:45.146865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.149486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.149524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:56.484 [2024-10-08 18:34:45.149537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:23:56.484 [2024-10-08 18:34:45.149545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.168270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.168328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:56.484 [2024-10-08 18:34:45.168348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.695 ms 00:23:56.484 [2024-10-08 18:34:45.168356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.175093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.175152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:56.484 [2024-10-08 18:34:45.175171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.676 ms 00:23:56.484 [2024-10-08 18:34:45.175179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.178388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.178436] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:56.484 [2024-10-08 18:34:45.178449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.081 ms 00:23:56.484 [2024-10-08 18:34:45.178456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.183343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.183391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:56.484 [2024-10-08 18:34:45.183411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.842 ms 00:23:56.484 [2024-10-08 18:34:45.183419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.183574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.183586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:56.484 [2024-10-08 18:34:45.183597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:23:56.484 [2024-10-08 18:34:45.183604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.186669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.186711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:56.484 [2024-10-08 18:34:45.186724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.041 ms 00:23:56.484 [2024-10-08 18:34:45.186732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.188986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.189051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:56.484 [2024-10-08 18:34:45.189063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.199 
ms 00:23:56.484 [2024-10-08 18:34:45.189072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.191021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.191156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:56.484 [2024-10-08 18:34:45.191175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.913 ms 00:23:56.484 [2024-10-08 18:34:45.191182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.192741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.484 [2024-10-08 18:34:45.192789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:56.484 [2024-10-08 18:34:45.192800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:23:56.484 [2024-10-08 18:34:45.192807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.484 [2024-10-08 18:34:45.192841] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:56.484 [2024-10-08 18:34:45.192855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:56.484 [2024-10-08 18:34:45.192868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 
[2024-10-08 18:34:45.192915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.192981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 
[2024-10-08 18:34:45.193042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 
00:23:56.485 [2024-10-08 18:34:45.193165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: 
free 00:23:56.485 [2024-10-08 18:34:45.193280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:23:56.485 [2024-10-08 18:34:45.193405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 
0 state: free 00:23:56.485 [2024-10-08 18:34:45.193524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:56.485 [2024-10-08 18:34:45.193668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:56.486 [2024-10-08 18:34:45.193675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:56.486 [2024-10-08 18:34:45.193693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:56.486 [2024-10-08 18:34:45.193700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:56.486 [2024-10-08 18:34:45.193709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:56.486 [2024-10-08 18:34:45.193717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:56.486 [2024-10-08 18:34:45.193728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:56.486 [2024-10-08 18:34:45.193744] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:56.486 [2024-10-08 18:34:45.193767] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 586781a0-ef3b-4b7a-b872-155f1a622c6f 00:23:56.486 [2024-10-08 18:34:45.193778] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:56.486 [2024-10-08 18:34:45.193787] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:56.486 [2024-10-08 18:34:45.193794] ftl_debug.c: 
215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:56.486 [2024-10-08 18:34:45.193805] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:56.486 [2024-10-08 18:34:45.193812] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:56.486 [2024-10-08 18:34:45.193821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:56.486 [2024-10-08 18:34:45.193828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:56.486 [2024-10-08 18:34:45.193837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:56.486 [2024-10-08 18:34:45.193843] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:56.486 [2024-10-08 18:34:45.193851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.486 [2024-10-08 18:34:45.193859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:56.486 [2024-10-08 18:34:45.193869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.012 ms 00:23:56.486 [2024-10-08 18:34:45.193876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.195366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.486 [2024-10-08 18:34:45.195387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:56.486 [2024-10-08 18:34:45.195397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:23:56.486 [2024-10-08 18:34:45.195409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.195487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.486 [2024-10-08 18:34:45.195495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:56.486 [2024-10-08 18:34:45.195505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:56.486 [2024-10-08 
18:34:45.195514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.200783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.200822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:56.486 [2024-10-08 18:34:45.200838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.200846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.200912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.200920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:56.486 [2024-10-08 18:34:45.200929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.200938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.201041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.201051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:56.486 [2024-10-08 18:34:45.201061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.201068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.201087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.201095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:56.486 [2024-10-08 18:34:45.201103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.201110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.210623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:23:56.486 [2024-10-08 18:34:45.210664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:56.486 [2024-10-08 18:34:45.210676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.210684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.218542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.218585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:56.486 [2024-10-08 18:34:45.218598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.218605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.218706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.218715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:56.486 [2024-10-08 18:34:45.218725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.218732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.218796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.218806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:56.486 [2024-10-08 18:34:45.218816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.218823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.218894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.218910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:56.486 [2024-10-08 18:34:45.218920] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.218927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.218958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.218966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:56.486 [2024-10-08 18:34:45.218976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.218983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.219021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.219031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:56.486 [2024-10-08 18:34:45.219040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.219047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.219092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.486 [2024-10-08 18:34:45.219105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:56.486 [2024-10-08 18:34:45.219114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.486 [2024-10-08 18:34:45.219121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.486 [2024-10-08 18:34:45.219246] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.020 ms, result 0 00:23:56.486 true 00:23:56.486 18:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 89813 00:23:56.486 18:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid89813 00:23:56.486 18:34:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:56.486 [2024-10-08 18:34:45.300154] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:23:56.486 [2024-10-08 18:34:45.300286] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91066 ] 00:23:56.746 [2024-10-08 18:34:45.430025] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:56.746 [2024-10-08 18:34:45.446802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.746 [2024-10-08 18:34:45.482960] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.132  [2024-10-08T18:34:47.555Z] Copying: 192/1024 [MB] (192 MBps) [2024-10-08T18:34:48.942Z] Copying: 385/1024 [MB] (192 MBps) [2024-10-08T18:34:49.887Z] Copying: 576/1024 [MB] (191 MBps) [2024-10-08T18:34:50.561Z] Copying: 767/1024 [MB] (191 MBps) [2024-10-08T18:34:51.132Z] Copying: 955/1024 [MB] (188 MBps) [2024-10-08T18:34:51.132Z] Copying: 1024/1024 [MB] (average 191 MBps) 00:24:02.282 00:24:02.282 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 89813 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:02.282 18:34:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:02.543 [2024-10-08 18:34:51.130782] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:24:02.543 [2024-10-08 18:34:51.130936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91124 ] 00:24:02.543 [2024-10-08 18:34:51.260741] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:02.543 [2024-10-08 18:34:51.281745] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.543 [2024-10-08 18:34:51.316310] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.802 [2024-10-08 18:34:51.405052] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:02.802 [2024-10-08 18:34:51.405256] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:02.802 [2024-10-08 18:34:51.468899] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:02.802 [2024-10-08 18:34:51.469227] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:02.802 [2024-10-08 18:34:51.469963] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:03.373 [2024-10-08 18:34:52.026406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.373 [2024-10-08 18:34:52.026460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:03.373 [2024-10-08 18:34:52.026474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:03.373 [2024-10-08 18:34:52.026482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.373 [2024-10-08 18:34:52.026532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.373 [2024-10-08 18:34:52.026545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:03.373 [2024-10-08 
18:34:52.026553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:03.373 [2024-10-08 18:34:52.026561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.373 [2024-10-08 18:34:52.026583] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:03.373 [2024-10-08 18:34:52.026835] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:03.373 [2024-10-08 18:34:52.026851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.373 [2024-10-08 18:34:52.026859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:03.373 [2024-10-08 18:34:52.026867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:24:03.373 [2024-10-08 18:34:52.026874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.373 [2024-10-08 18:34:52.027944] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:03.373 [2024-10-08 18:34:52.030896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.373 [2024-10-08 18:34:52.030930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:03.373 [2024-10-08 18:34:52.030942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:24:03.373 [2024-10-08 18:34:52.030950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.373 [2024-10-08 18:34:52.031013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.373 [2024-10-08 18:34:52.031023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:03.373 [2024-10-08 18:34:52.031033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:03.373 [2024-10-08 18:34:52.031041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.373 [2024-10-08 
18:34:52.035933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.373 [2024-10-08 18:34:52.035961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:03.373 [2024-10-08 18:34:52.035972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.844 ms 00:24:03.373 [2024-10-08 18:34:52.035980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.373 [2024-10-08 18:34:52.036056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.373 [2024-10-08 18:34:52.036064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:03.373 [2024-10-08 18:34:52.036074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:03.373 [2024-10-08 18:34:52.036082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.373 [2024-10-08 18:34:52.036126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.373 [2024-10-08 18:34:52.036135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:03.373 [2024-10-08 18:34:52.036144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:03.373 [2024-10-08 18:34:52.036154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.373 [2024-10-08 18:34:52.036179] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:03.373 [2024-10-08 18:34:52.037508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.374 [2024-10-08 18:34:52.037535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:03.374 [2024-10-08 18:34:52.037544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.334 ms 00:24:03.374 [2024-10-08 18:34:52.037551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.374 [2024-10-08 18:34:52.037582] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.374 [2024-10-08 18:34:52.037590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:03.374 [2024-10-08 18:34:52.037598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:03.374 [2024-10-08 18:34:52.037607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.374 [2024-10-08 18:34:52.037627] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:03.374 [2024-10-08 18:34:52.037648] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:03.374 [2024-10-08 18:34:52.037682] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:03.374 [2024-10-08 18:34:52.037704] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:03.374 [2024-10-08 18:34:52.037822] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:03.374 [2024-10-08 18:34:52.037833] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:03.374 [2024-10-08 18:34:52.037847] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:03.374 [2024-10-08 18:34:52.037857] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:03.374 [2024-10-08 18:34:52.037865] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:03.374 [2024-10-08 18:34:52.037873] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:03.374 [2024-10-08 18:34:52.037880] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:03.374 [2024-10-08 
18:34:52.037887] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:03.374 [2024-10-08 18:34:52.037894] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:03.374 [2024-10-08 18:34:52.037904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.374 [2024-10-08 18:34:52.037914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:03.374 [2024-10-08 18:34:52.037925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:24:03.374 [2024-10-08 18:34:52.037932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.374 [2024-10-08 18:34:52.038014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.374 [2024-10-08 18:34:52.038022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:03.374 [2024-10-08 18:34:52.038033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:03.374 [2024-10-08 18:34:52.038040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.374 [2024-10-08 18:34:52.038136] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:03.374 [2024-10-08 18:34:52.038150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:03.374 [2024-10-08 18:34:52.038162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:03.374 [2024-10-08 18:34:52.038170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:03.374 [2024-10-08 18:34:52.038194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:03.374 [2024-10-08 18:34:52.038210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:24:03.374 [2024-10-08 18:34:52.038217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:03.374 [2024-10-08 18:34:52.038238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:03.374 [2024-10-08 18:34:52.038247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:03.374 [2024-10-08 18:34:52.038254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:03.374 [2024-10-08 18:34:52.038262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:03.374 [2024-10-08 18:34:52.038270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:03.374 [2024-10-08 18:34:52.038277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:03.374 [2024-10-08 18:34:52.038293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:03.374 [2024-10-08 18:34:52.038300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:03.374 [2024-10-08 18:34:52.038315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.374 [2024-10-08 18:34:52.038330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:03.374 [2024-10-08 18:34:52.038338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.374 [2024-10-08 18:34:52.038353] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l2 00:24:03.374 [2024-10-08 18:34:52.038362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.374 [2024-10-08 18:34:52.038377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:03.374 [2024-10-08 18:34:52.038384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:03.374 [2024-10-08 18:34:52.038400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:03.374 [2024-10-08 18:34:52.038407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:03.374 [2024-10-08 18:34:52.038421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:03.374 [2024-10-08 18:34:52.038429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:03.374 [2024-10-08 18:34:52.038436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:03.374 [2024-10-08 18:34:52.038444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:03.374 [2024-10-08 18:34:52.038452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:03.374 [2024-10-08 18:34:52.038459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:03.374 [2024-10-08 18:34:52.038474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:03.374 [2024-10-08 18:34:52.038484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038492] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:03.374 [2024-10-08 18:34:52.038500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:03.374 [2024-10-08 18:34:52.038508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:03.374 [2024-10-08 18:34:52.038516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:03.374 [2024-10-08 18:34:52.038524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:03.374 [2024-10-08 18:34:52.038532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:03.374 [2024-10-08 18:34:52.038540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:03.374 [2024-10-08 18:34:52.038547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:03.374 [2024-10-08 18:34:52.038554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:03.374 [2024-10-08 18:34:52.038562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:03.374 [2024-10-08 18:34:52.038571] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:03.374 [2024-10-08 18:34:52.038581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:03.374 [2024-10-08 18:34:52.038590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:03.374 [2024-10-08 18:34:52.038598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:03.374 [2024-10-08 18:34:52.038606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:03.374 [2024-10-08 18:34:52.038616] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:03.374 [2024-10-08 18:34:52.038624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:03.374 [2024-10-08 18:34:52.038633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:03.374 [2024-10-08 18:34:52.038641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:03.374 [2024-10-08 18:34:52.038648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:03.374 [2024-10-08 18:34:52.038656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:03.374 [2024-10-08 18:34:52.038664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:03.374 [2024-10-08 18:34:52.038672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:03.374 [2024-10-08 18:34:52.038680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:03.374 [2024-10-08 18:34:52.038689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:03.374 [2024-10-08 18:34:52.038697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:03.374 [2024-10-08 18:34:52.038705] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout 
- base dev: 00:24:03.374 [2024-10-08 18:34:52.038714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:03.374 [2024-10-08 18:34:52.038725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:03.374 [2024-10-08 18:34:52.038734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:03.374 [2024-10-08 18:34:52.038742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:03.375 [2024-10-08 18:34:52.038773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:03.375 [2024-10-08 18:34:52.038782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.038791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:03.375 [2024-10-08 18:34:52.038799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.714 ms 00:24:03.375 [2024-10-08 18:34:52.038807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.062163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.062468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:03.375 [2024-10-08 18:34:52.062562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.304 ms 00:24:03.375 [2024-10-08 18:34:52.062594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.062730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.062937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize band addresses 00:24:03.375 [2024-10-08 18:34:52.063058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:24:03.375 [2024-10-08 18:34:52.063135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.071931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.072066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:03.375 [2024-10-08 18:34:52.072122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.689 ms 00:24:03.375 [2024-10-08 18:34:52.072149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.072223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.072266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:03.375 [2024-10-08 18:34:52.072305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:03.375 [2024-10-08 18:34:52.072338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.072732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.072808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:03.375 [2024-10-08 18:34:52.072834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:24:03.375 [2024-10-08 18:34:52.072857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.073034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.073127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:03.375 [2024-10-08 18:34:52.073156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:24:03.375 [2024-10-08 18:34:52.073184] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.078408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.078530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:03.375 [2024-10-08 18:34:52.078587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.805 ms 00:24:03.375 [2024-10-08 18:34:52.078617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.081460] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:03.375 [2024-10-08 18:34:52.081926] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:03.375 [2024-10-08 18:34:52.081948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.081959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:03.375 [2024-10-08 18:34:52.081968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms 00:24:03.375 [2024-10-08 18:34:52.081976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.096542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.096589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:03.375 [2024-10-08 18:34:52.096612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.518 ms 00:24:03.375 [2024-10-08 18:34:52.096623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.099003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.099037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:03.375 [2024-10-08 
18:34:52.099047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.332 ms 00:24:03.375 [2024-10-08 18:34:52.099054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.100768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.100795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:03.375 [2024-10-08 18:34:52.100805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:24:03.375 [2024-10-08 18:34:52.100818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.101205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.101229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:03.375 [2024-10-08 18:34:52.101240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:24:03.375 [2024-10-08 18:34:52.101248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.116537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.116716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:03.375 [2024-10-08 18:34:52.116735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.271 ms 00:24:03.375 [2024-10-08 18:34:52.116743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.124358] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:03.375 [2024-10-08 18:34:52.127109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.127221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:03.375 [2024-10-08 18:34:52.127241] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 10.309 ms 00:24:03.375 [2024-10-08 18:34:52.127251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.127323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.127333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:03.375 [2024-10-08 18:34:52.127341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:03.375 [2024-10-08 18:34:52.127351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.127444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.127455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:03.375 [2024-10-08 18:34:52.127468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:03.375 [2024-10-08 18:34:52.127475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.127497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.127509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:03.375 [2024-10-08 18:34:52.127517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:03.375 [2024-10-08 18:34:52.127525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.127556] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:03.375 [2024-10-08 18:34:52.127570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.127577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:03.375 [2024-10-08 18:34:52.127586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:24:03.375 [2024-10-08 
18:34:52.127593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.132009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.132044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:03.375 [2024-10-08 18:34:52.132055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.396 ms 00:24:03.375 [2024-10-08 18:34:52.132064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.132141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:03.375 [2024-10-08 18:34:52.132153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:03.375 [2024-10-08 18:34:52.132163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:03.375 [2024-10-08 18:34:52.132174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:03.375 [2024-10-08 18:34:52.133080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.231 ms, result 0 00:24:04.318  [2024-10-08T18:34:54.558Z] Copying: 12/1024 [MB] (12 MBps) [2024-10-08T18:34:55.501Z] Copying: 23/1024 [MB] (10 MBps) [2024-10-08T18:34:56.445Z] Copying: 34/1024 [MB] (11 MBps) [2024-10-08T18:34:57.386Z] Copying: 46/1024 [MB] (11 MBps) [2024-10-08T18:34:58.327Z] Copying: 56/1024 [MB] (10 MBps) [2024-10-08T18:34:59.269Z] Copying: 67/1024 [MB] (11 MBps) [2024-10-08T18:35:00.212Z] Copying: 79416/1048576 [kB] (9988 kBps) [2024-10-08T18:35:01.154Z] Copying: 89136/1048576 [kB] (9720 kBps) [2024-10-08T18:35:02.155Z] Copying: 97/1024 [MB] (10 MBps) [2024-10-08T18:35:03.537Z] Copying: 109496/1048576 [kB] (9480 kBps) [2024-10-08T18:35:04.481Z] Copying: 119568/1048576 [kB] (10072 kBps) [2024-10-08T18:35:05.423Z] Copying: 128/1024 [MB] (11 MBps) [2024-10-08T18:35:06.404Z] Copying: 139/1024 [MB] (11 MBps) 
[2024-10-08T18:35:07.348Z] Copying: 149/1024 [MB] (10 MBps) [2024-10-08T18:35:08.288Z] Copying: 160/1024 [MB] (10 MBps) [2024-10-08T18:35:09.227Z] Copying: 170/1024 [MB] (10 MBps) [2024-10-08T18:35:10.168Z] Copying: 184/1024 [MB] (14 MBps) [2024-10-08T18:35:11.550Z] Copying: 194/1024 [MB] (10 MBps) [2024-10-08T18:35:12.500Z] Copying: 204/1024 [MB] (10 MBps) [2024-10-08T18:35:13.441Z] Copying: 215/1024 [MB] (10 MBps) [2024-10-08T18:35:14.457Z] Copying: 226/1024 [MB] (11 MBps) [2024-10-08T18:35:15.399Z] Copying: 239/1024 [MB] (13 MBps) [2024-10-08T18:35:16.337Z] Copying: 252/1024 [MB] (12 MBps) [2024-10-08T18:35:17.279Z] Copying: 270/1024 [MB] (17 MBps) [2024-10-08T18:35:18.222Z] Copying: 281640/1048576 [kB] (4856 kBps) [2024-10-08T18:35:19.165Z] Copying: 282208/1048576 [kB] (568 kBps) [2024-10-08T18:35:20.551Z] Copying: 289/1024 [MB] (13 MBps) [2024-10-08T18:35:21.494Z] Copying: 302/1024 [MB] (12 MBps) [2024-10-08T18:35:22.439Z] Copying: 313/1024 [MB] (10 MBps) [2024-10-08T18:35:23.383Z] Copying: 323/1024 [MB] (10 MBps) [2024-10-08T18:35:24.340Z] Copying: 341260/1048576 [kB] (10172 kBps) [2024-10-08T18:35:25.282Z] Copying: 343/1024 [MB] (10 MBps) [2024-10-08T18:35:26.226Z] Copying: 359148/1048576 [kB] (7628 kBps) [2024-10-08T18:35:27.166Z] Copying: 361/1024 [MB] (10 MBps) [2024-10-08T18:35:28.548Z] Copying: 371/1024 [MB] (10 MBps) [2024-10-08T18:35:29.489Z] Copying: 388/1024 [MB] (16 MBps) [2024-10-08T18:35:30.464Z] Copying: 401/1024 [MB] (13 MBps) [2024-10-08T18:35:31.406Z] Copying: 412/1024 [MB] (11 MBps) [2024-10-08T18:35:32.376Z] Copying: 424/1024 [MB] (11 MBps) [2024-10-08T18:35:33.319Z] Copying: 434/1024 [MB] (10 MBps) [2024-10-08T18:35:34.263Z] Copying: 444/1024 [MB] (10 MBps) [2024-10-08T18:35:35.208Z] Copying: 455/1024 [MB] (11 MBps) [2024-10-08T18:35:36.152Z] Copying: 467/1024 [MB] (12 MBps) [2024-10-08T18:35:37.539Z] Copying: 478/1024 [MB] (10 MBps) [2024-10-08T18:35:38.483Z] Copying: 488/1024 [MB] (10 MBps) [2024-10-08T18:35:39.426Z] Copying: 
510180/1048576 [kB] (10076 kBps) [2024-10-08T18:35:40.369Z] Copying: 508/1024 [MB] (10 MBps) [2024-10-08T18:35:41.310Z] Copying: 519/1024 [MB] (10 MBps) [2024-10-08T18:35:42.251Z] Copying: 530/1024 [MB] (11 MBps) [2024-10-08T18:35:43.194Z] Copying: 553604/1048576 [kB] (10144 kBps) [2024-10-08T18:35:44.577Z] Copying: 552/1024 [MB] (11 MBps) [2024-10-08T18:35:45.161Z] Copying: 566/1024 [MB] (14 MBps) [2024-10-08T18:35:46.174Z] Copying: 577/1024 [MB] (10 MBps) [2024-10-08T18:35:47.556Z] Copying: 589/1024 [MB] (11 MBps) [2024-10-08T18:35:48.496Z] Copying: 600/1024 [MB] (10 MBps) [2024-10-08T18:35:49.436Z] Copying: 611/1024 [MB] (11 MBps) [2024-10-08T18:35:50.420Z] Copying: 622/1024 [MB] (10 MBps) [2024-10-08T18:35:51.361Z] Copying: 632/1024 [MB] (10 MBps) [2024-10-08T18:35:52.304Z] Copying: 643/1024 [MB] (11 MBps) [2024-10-08T18:35:53.248Z] Copying: 669612/1048576 [kB] (10208 kBps) [2024-10-08T18:35:54.187Z] Copying: 667/1024 [MB] (13 MBps) [2024-10-08T18:35:55.165Z] Copying: 681/1024 [MB] (14 MBps) [2024-10-08T18:35:56.548Z] Copying: 695/1024 [MB] (13 MBps) [2024-10-08T18:35:57.504Z] Copying: 711/1024 [MB] (15 MBps) [2024-10-08T18:35:58.458Z] Copying: 721/1024 [MB] (10 MBps) [2024-10-08T18:35:59.400Z] Copying: 732/1024 [MB] (10 MBps) [2024-10-08T18:36:00.343Z] Copying: 743/1024 [MB] (11 MBps) [2024-10-08T18:36:01.286Z] Copying: 753/1024 [MB] (10 MBps) [2024-10-08T18:36:02.277Z] Copying: 764/1024 [MB] (10 MBps) [2024-10-08T18:36:03.218Z] Copying: 775/1024 [MB] (10 MBps) [2024-10-08T18:36:04.165Z] Copying: 803568/1048576 [kB] (9804 kBps) [2024-10-08T18:36:05.552Z] Copying: 794/1024 [MB] (10 MBps) [2024-10-08T18:36:06.498Z] Copying: 807/1024 [MB] (12 MBps) [2024-10-08T18:36:07.438Z] Copying: 819/1024 [MB] (12 MBps) [2024-10-08T18:36:08.384Z] Copying: 829/1024 [MB] (10 MBps) [2024-10-08T18:36:09.327Z] Copying: 859520/1048576 [kB] (9684 kBps) [2024-10-08T18:36:10.267Z] Copying: 868712/1048576 [kB] (9192 kBps) [2024-10-08T18:36:11.210Z] Copying: 878612/1048576 [kB] (9900 
kBps) [2024-10-08T18:36:12.150Z] Copying: 888256/1048576 [kB] (9644 kBps) [2024-10-08T18:36:13.536Z] Copying: 898012/1048576 [kB] (9756 kBps) [2024-10-08T18:36:14.483Z] Copying: 907584/1048576 [kB] (9572 kBps) [2024-10-08T18:36:15.425Z] Copying: 916744/1048576 [kB] (9160 kBps) [2024-10-08T18:36:16.368Z] Copying: 926064/1048576 [kB] (9320 kBps) [2024-10-08T18:36:17.318Z] Copying: 935856/1048576 [kB] (9792 kBps) [2024-10-08T18:36:18.258Z] Copying: 945776/1048576 [kB] (9920 kBps) [2024-10-08T18:36:19.224Z] Copying: 955376/1048576 [kB] (9600 kBps) [2024-10-08T18:36:20.166Z] Copying: 965516/1048576 [kB] (10140 kBps) [2024-10-08T18:36:21.554Z] Copying: 953/1024 [MB] (10 MBps) [2024-10-08T18:36:22.498Z] Copying: 985632/1048576 [kB] (9580 kBps) [2024-10-08T18:36:23.444Z] Copying: 972/1024 [MB] (10 MBps) [2024-10-08T18:36:24.383Z] Copying: 1006068/1048576 [kB] (10148 kBps) [2024-10-08T18:36:25.420Z] Copying: 1016304/1048576 [kB] (10236 kBps) [2024-10-08T18:36:26.360Z] Copying: 1005/1024 [MB] (12 MBps) [2024-10-08T18:36:26.932Z] Copying: 1017/1024 [MB] (12 MBps) [2024-10-08T18:36:26.932Z] Copying: 1024/1024 [MB] (average 10 MBps)[2024-10-08 18:36:26.658174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.658292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:38.082 [2024-10-08 18:36:26.658354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:38.082 [2024-10-08 18:36:26.658378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.658460] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:38.082 [2024-10-08 18:36:26.658945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.659052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:38.082 [2024-10-08 18:36:26.659112] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:25:38.082 [2024-10-08 18:36:26.659135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.661225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.661327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:38.082 [2024-10-08 18:36:26.661379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.035 ms 00:25:38.082 [2024-10-08 18:36:26.661424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.677131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.677235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:38.082 [2024-10-08 18:36:26.677327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.666 ms 00:25:38.082 [2024-10-08 18:36:26.677350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.683517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.683608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:38.082 [2024-10-08 18:36:26.683654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.124 ms 00:25:38.082 [2024-10-08 18:36:26.683676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.685600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.685693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:38.082 [2024-10-08 18:36:26.685780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.754 ms 00:25:38.082 [2024-10-08 18:36:26.685791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 
18:36:26.689507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.689596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:38.082 [2024-10-08 18:36:26.689650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.654 ms 00:25:38.082 [2024-10-08 18:36:26.689671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.692057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.692148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:38.082 [2024-10-08 18:36:26.692195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:25:38.082 [2024-10-08 18:36:26.692217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.694390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.694487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:38.082 [2024-10-08 18:36:26.694534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:25:38.082 [2024-10-08 18:36:26.694557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.696807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.696908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:38.082 [2024-10-08 18:36:26.696955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms 00:25:38.082 [2024-10-08 18:36:26.696995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.698884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.698979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist superblock 00:25:38.082 [2024-10-08 18:36:26.699033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.806 ms 00:25:38.082 [2024-10-08 18:36:26.699072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.700534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.082 [2024-10-08 18:36:26.700629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:38.082 [2024-10-08 18:36:26.700675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:25:38.082 [2024-10-08 18:36:26.700715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.082 [2024-10-08 18:36:26.700800] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:38.082 [2024-10-08 18:36:26.700872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 1024 / 261120 wr_cnt: 1 state: open 00:25:38.082 [2024-10-08 18:36:26.700961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.700993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701378] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.701953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 
18:36:26.702004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 
[2024-10-08 18:36:26.702501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:38.082 [2024-10-08 18:36:26.702775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.702804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.702833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.702886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.702915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.702943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:25:38.083 [2024-10-08 18:36:26.703108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: 
free 00:25:38.083 [2024-10-08 18:36:26.703687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 
state: free 00:25:38.083 [2024-10-08 18:36:26.703879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 
0 state: free 00:25:38.083 [2024-10-08 18:36:26.703980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.703994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.704001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.704009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.704017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.704025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.704034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:38.083 [2024-10-08 18:36:26.704049] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:38.083 [2024-10-08 18:36:26.704061] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 586781a0-ef3b-4b7a-b872-155f1a622c6f 00:25:38.083 [2024-10-08 18:36:26.704069] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 1024 00:25:38.083 [2024-10-08 18:36:26.704080] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1984 00:25:38.083 [2024-10-08 18:36:26.704086] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1024 00:25:38.083 [2024-10-08 18:36:26.704094] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.9375 00:25:38.083 [2024-10-08 18:36:26.704101] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:25:38.083 [2024-10-08 18:36:26.704109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:38.083 [2024-10-08 18:36:26.704116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:38.083 [2024-10-08 18:36:26.704122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:38.083 [2024-10-08 18:36:26.704128] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:38.083 [2024-10-08 18:36:26.704136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.083 [2024-10-08 18:36:26.704143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:38.083 [2024-10-08 18:36:26.704151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:25:38.083 [2024-10-08 18:36:26.704157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-10-08 18:36:26.705664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.083 [2024-10-08 18:36:26.705695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:38.083 [2024-10-08 18:36:26.705705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:25:38.083 [2024-10-08 18:36:26.705711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-10-08 18:36:26.705810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.083 [2024-10-08 18:36:26.705825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:38.083 [2024-10-08 18:36:26.705836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:38.083 [2024-10-08 18:36:26.705843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-10-08 18:36:26.710160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-10-08 18:36:26.710194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize reloc 00:25:38.083 [2024-10-08 18:36:26.710209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-10-08 18:36:26.710215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-10-08 18:36:26.710262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-10-08 18:36:26.710270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:38.083 [2024-10-08 18:36:26.710280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-10-08 18:36:26.710293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-10-08 18:36:26.710332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-10-08 18:36:26.710341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:38.083 [2024-10-08 18:36:26.710348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-10-08 18:36:26.710355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-10-08 18:36:26.710369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-10-08 18:36:26.710376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:38.083 [2024-10-08 18:36:26.710383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-10-08 18:36:26.710392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-10-08 18:36:26.718708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.083 [2024-10-08 18:36:26.718746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:38.083 [2024-10-08 18:36:26.718770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.083 [2024-10-08 18:36:26.718778] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.083 [2024-10-08 18:36:26.725552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.084 [2024-10-08 18:36:26.725593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.084 [2024-10-08 18:36:26.725603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.084 [2024-10-08 18:36:26.725618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.084 [2024-10-08 18:36:26.725659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.084 [2024-10-08 18:36:26.725667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:38.084 [2024-10-08 18:36:26.725675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.084 [2024-10-08 18:36:26.725682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.084 [2024-10-08 18:36:26.725705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.084 [2024-10-08 18:36:26.725712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:38.084 [2024-10-08 18:36:26.725720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.084 [2024-10-08 18:36:26.725728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.084 [2024-10-08 18:36:26.725799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.084 [2024-10-08 18:36:26.725809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:38.084 [2024-10-08 18:36:26.725816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.084 [2024-10-08 18:36:26.725823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.084 [2024-10-08 18:36:26.725853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.084 
[2024-10-08 18:36:26.725862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:38.084 [2024-10-08 18:36:26.725869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.084 [2024-10-08 18:36:26.725876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.084 [2024-10-08 18:36:26.725912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.084 [2024-10-08 18:36:26.725920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:38.084 [2024-10-08 18:36:26.725928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.084 [2024-10-08 18:36:26.725936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.084 [2024-10-08 18:36:26.725974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.084 [2024-10-08 18:36:26.725989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:38.084 [2024-10-08 18:36:26.725997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.084 [2024-10-08 18:36:26.726005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.084 [2024-10-08 18:36:26.726111] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.913 ms, result 0 00:25:39.026 00:25:39.026 00:25:39.026 18:36:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:40.938 18:36:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:40.938 [2024-10-08 18:36:29.781157] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:25:40.938 [2024-10-08 18:36:29.781271] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92132 ] 00:25:41.221 [2024-10-08 18:36:29.921155] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:41.221 [2024-10-08 18:36:29.941819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.221 [2024-10-08 18:36:29.975609] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:41.221 [2024-10-08 18:36:30.063525] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:41.221 [2024-10-08 18:36:30.063588] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:41.482 [2024-10-08 18:36:30.221768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.482 [2024-10-08 18:36:30.221824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:41.482 [2024-10-08 18:36:30.221840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:41.482 [2024-10-08 18:36:30.221849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.482 [2024-10-08 18:36:30.221904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.482 [2024-10-08 18:36:30.221915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:41.482 [2024-10-08 18:36:30.221923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:41.482 [2024-10-08 18:36:30.221930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.482 [2024-10-08 18:36:30.221952] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write 
buffer cache 00:25:41.482 [2024-10-08 18:36:30.222206] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:41.482 [2024-10-08 18:36:30.222221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.482 [2024-10-08 18:36:30.222232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:41.482 [2024-10-08 18:36:30.222240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:25:41.482 [2024-10-08 18:36:30.222250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.483 [2024-10-08 18:36:30.223373] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:41.483 [2024-10-08 18:36:30.225907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.483 [2024-10-08 18:36:30.225939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:41.483 [2024-10-08 18:36:30.225955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.534 ms 00:25:41.483 [2024-10-08 18:36:30.225962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.483 [2024-10-08 18:36:30.226020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.483 [2024-10-08 18:36:30.226029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:41.483 [2024-10-08 18:36:30.226038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:41.483 [2024-10-08 18:36:30.226045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.483 [2024-10-08 18:36:30.230905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.483 [2024-10-08 18:36:30.230932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:41.483 [2024-10-08 18:36:30.230943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.800 ms 00:25:41.483 
[2024-10-08 18:36:30.230955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.483 [2024-10-08 18:36:30.231033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.483 [2024-10-08 18:36:30.231044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:41.483 [2024-10-08 18:36:30.231054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:41.483 [2024-10-08 18:36:30.231062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.483 [2024-10-08 18:36:30.231101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.483 [2024-10-08 18:36:30.231111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:41.483 [2024-10-08 18:36:30.231118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:41.483 [2024-10-08 18:36:30.231125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.483 [2024-10-08 18:36:30.231150] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:41.483 [2024-10-08 18:36:30.232508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.483 [2024-10-08 18:36:30.232528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:41.483 [2024-10-08 18:36:30.232537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:25:41.483 [2024-10-08 18:36:30.232544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.483 [2024-10-08 18:36:30.232571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.483 [2024-10-08 18:36:30.232580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:41.483 [2024-10-08 18:36:30.232587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:41.483 [2024-10-08 18:36:30.232595] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.483 [2024-10-08 18:36:30.232619] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:41.483 [2024-10-08 18:36:30.232636] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:41.483 [2024-10-08 18:36:30.232671] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:41.483 [2024-10-08 18:36:30.232685] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:41.483 [2024-10-08 18:36:30.232804] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:41.483 [2024-10-08 18:36:30.232815] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:41.483 [2024-10-08 18:36:30.232842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:41.483 [2024-10-08 18:36:30.232854] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:41.483 [2024-10-08 18:36:30.232862] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:41.483 [2024-10-08 18:36:30.232870] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:41.483 [2024-10-08 18:36:30.232878] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:41.483 [2024-10-08 18:36:30.232885] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:41.483 [2024-10-08 18:36:30.232892] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:41.483 [2024-10-08 18:36:30.232899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.483 [2024-10-08 
18:36:30.232910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:41.483 [2024-10-08 18:36:30.232918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:25:41.483 [2024-10-08 18:36:30.232927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.483 [2024-10-08 18:36:30.233011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.483 [2024-10-08 18:36:30.233044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:41.483 [2024-10-08 18:36:30.233055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:41.483 [2024-10-08 18:36:30.233062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.483 [2024-10-08 18:36:30.233159] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:41.483 [2024-10-08 18:36:30.233169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:41.483 [2024-10-08 18:36:30.233178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:41.483 [2024-10-08 18:36:30.233186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:41.483 [2024-10-08 18:36:30.233203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:41.483 [2024-10-08 18:36:30.233219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:41.483 [2024-10-08 18:36:30.233232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:41.483 [2024-10-08 18:36:30.233249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 
00:25:41.483 [2024-10-08 18:36:30.233256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:41.483 [2024-10-08 18:36:30.233263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:41.483 [2024-10-08 18:36:30.233272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:41.483 [2024-10-08 18:36:30.233280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:41.483 [2024-10-08 18:36:30.233287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:41.483 [2024-10-08 18:36:30.233303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:41.483 [2024-10-08 18:36:30.233310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:41.483 [2024-10-08 18:36:30.233325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.483 [2024-10-08 18:36:30.233339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:41.483 [2024-10-08 18:36:30.233347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.483 [2024-10-08 18:36:30.233362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:41.483 [2024-10-08 18:36:30.233375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.483 [2024-10-08 18:36:30.233390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
p2l3 00:25:41.483 [2024-10-08 18:36:30.233397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:41.483 [2024-10-08 18:36:30.233412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:41.483 [2024-10-08 18:36:30.233420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:41.483 [2024-10-08 18:36:30.233434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:41.483 [2024-10-08 18:36:30.233442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:41.483 [2024-10-08 18:36:30.233449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:41.483 [2024-10-08 18:36:30.233457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:41.483 [2024-10-08 18:36:30.233464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:41.483 [2024-10-08 18:36:30.233471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:41.483 [2024-10-08 18:36:30.233486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:41.483 [2024-10-08 18:36:30.233494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233501] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:41.483 [2024-10-08 18:36:30.233509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:41.483 [2024-10-08 18:36:30.233519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:41.483 [2024-10-08 18:36:30.233526] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:41.483 [2024-10-08 18:36:30.233534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:41.483 [2024-10-08 18:36:30.233540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:41.483 [2024-10-08 18:36:30.233547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:41.483 [2024-10-08 18:36:30.233554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:41.483 [2024-10-08 18:36:30.233561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:41.483 [2024-10-08 18:36:30.233567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:41.483 [2024-10-08 18:36:30.233575] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:41.483 [2024-10-08 18:36:30.233588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:41.483 [2024-10-08 18:36:30.233596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:41.483 [2024-10-08 18:36:30.233603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:41.483 [2024-10-08 18:36:30.233611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:41.484 [2024-10-08 18:36:30.233619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:41.484 [2024-10-08 18:36:30.233626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:41.484 [2024-10-08 18:36:30.233633] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:41.484 [2024-10-08 18:36:30.233640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:41.484 [2024-10-08 18:36:30.233647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:41.484 [2024-10-08 18:36:30.233653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:41.484 [2024-10-08 18:36:30.233660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:41.484 [2024-10-08 18:36:30.233667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:41.484 [2024-10-08 18:36:30.233676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:41.484 [2024-10-08 18:36:30.233682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:41.484 [2024-10-08 18:36:30.233690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:41.484 [2024-10-08 18:36:30.233697] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:41.484 [2024-10-08 18:36:30.233708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:41.484 [2024-10-08 18:36:30.233716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:25:41.484 [2024-10-08 18:36:30.233723] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:41.484 [2024-10-08 18:36:30.233730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:41.484 [2024-10-08 18:36:30.233739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:41.484 [2024-10-08 18:36:30.233746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.233785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:41.484 [2024-10-08 18:36:30.233794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:25:41.484 [2024-10-08 18:36:30.233802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.248914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.249085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:41.484 [2024-10-08 18:36:30.249105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.063 ms 00:25:41.484 [2024-10-08 18:36:30.249114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.249212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.249221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:41.484 [2024-10-08 18:36:30.249237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:25:41.484 [2024-10-08 18:36:30.249245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.257471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:25:41.484 [2024-10-08 18:36:30.257505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:41.484 [2024-10-08 18:36:30.257516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.162 ms 00:25:41.484 [2024-10-08 18:36:30.257525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.257558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.257567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:41.484 [2024-10-08 18:36:30.257576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:41.484 [2024-10-08 18:36:30.257588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.257954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.257970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:41.484 [2024-10-08 18:36:30.257979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:25:41.484 [2024-10-08 18:36:30.257987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.258103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.258112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:41.484 [2024-10-08 18:36:30.258120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:25:41.484 [2024-10-08 18:36:30.258127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.262666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.262699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:41.484 [2024-10-08 18:36:30.262708] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.519 ms 00:25:41.484 [2024-10-08 18:36:30.262715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.265649] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:25:41.484 [2024-10-08 18:36:30.265683] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:41.484 [2024-10-08 18:36:30.265695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.265703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:41.484 [2024-10-08 18:36:30.265712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.869 ms 00:25:41.484 [2024-10-08 18:36:30.265726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.288905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.288963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:41.484 [2024-10-08 18:36:30.288977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.132 ms 00:25:41.484 [2024-10-08 18:36:30.288986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.290831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.290945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:41.484 [2024-10-08 18:36:30.290961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.793 ms 00:25:41.484 [2024-10-08 18:36:30.290969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.292797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.292823] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:41.484 [2024-10-08 18:36:30.292832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:25:41.484 [2024-10-08 18:36:30.292839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.293186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.293198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:41.484 [2024-10-08 18:36:30.293212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:25:41.484 [2024-10-08 18:36:30.293223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.308631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.308683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:41.484 [2024-10-08 18:36:30.308695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.389 ms 00:25:41.484 [2024-10-08 18:36:30.308703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.316168] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:41.484 [2024-10-08 18:36:30.318613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.318734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:41.484 [2024-10-08 18:36:30.318779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.873 ms 00:25:41.484 [2024-10-08 18:36:30.318787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.318849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.318860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore L2P 00:25:41.484 [2024-10-08 18:36:30.318868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:41.484 [2024-10-08 18:36:30.318876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.319441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.319460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:41.484 [2024-10-08 18:36:30.319472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:25:41.484 [2024-10-08 18:36:30.319482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.319506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.319519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:41.484 [2024-10-08 18:36:30.319530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:41.484 [2024-10-08 18:36:30.319538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.319567] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:41.484 [2024-10-08 18:36:30.319576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.319584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:41.484 [2024-10-08 18:36:30.319595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:41.484 [2024-10-08 18:36:30.319606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.323272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.323303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:41.484 [2024-10-08 
18:36:30.323313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.645 ms 00:25:41.484 [2024-10-08 18:36:30.323321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.323386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:41.484 [2024-10-08 18:36:30.323395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:41.484 [2024-10-08 18:36:30.323403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:41.484 [2024-10-08 18:36:30.323410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:41.484 [2024-10-08 18:36:30.324449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 102.306 ms, result 0 00:25:42.863  [2024-10-08T18:36:32.653Z] Copying: 1000/1048576 [kB] (1000 kBps) [2024-10-08T18:36:33.596Z] Copying: 2116/1048576 [kB] (1116 kBps) [2024-10-08T18:36:35.015Z] Copying: 4780/1048576 [kB] (2664 kBps) [2024-10-08T18:36:35.585Z] Copying: 19/1024 [MB] (14 MBps) [2024-10-08T18:36:36.969Z] Copying: 37/1024 [MB] (18 MBps) [2024-10-08T18:36:37.539Z] Copying: 55/1024 [MB] (17 MBps) [2024-10-08T18:36:38.944Z] Copying: 74/1024 [MB] (19 MBps) [2024-10-08T18:36:39.887Z] Copying: 92/1024 [MB] (17 MBps) [2024-10-08T18:36:40.830Z] Copying: 105/1024 [MB] (13 MBps) [2024-10-08T18:36:41.786Z] Copying: 119/1024 [MB] (13 MBps) [2024-10-08T18:36:42.729Z] Copying: 135/1024 [MB] (16 MBps) [2024-10-08T18:36:43.729Z] Copying: 148/1024 [MB] (12 MBps) [2024-10-08T18:36:44.672Z] Copying: 160/1024 [MB] (12 MBps) [2024-10-08T18:36:45.621Z] Copying: 173/1024 [MB] (12 MBps) [2024-10-08T18:36:46.564Z] Copying: 186/1024 [MB] (13 MBps) [2024-10-08T18:36:47.952Z] Copying: 199/1024 [MB] (13 MBps) [2024-10-08T18:36:48.898Z] Copying: 213/1024 [MB] (14 MBps) [2024-10-08T18:36:49.843Z] Copying: 227/1024 [MB] (14 MBps) [2024-10-08T18:36:50.789Z] Copying: 242/1024 [MB] (14 MBps) 
[2024-10-08T18:36:51.732Z] Copying: 257/1024 [MB] (14 MBps) [2024-10-08T18:36:52.675Z] Copying: 272/1024 [MB] (14 MBps) [2024-10-08T18:36:53.622Z] Copying: 285/1024 [MB] (13 MBps) [2024-10-08T18:36:54.563Z] Copying: 298/1024 [MB] (12 MBps) [2024-10-08T18:36:55.949Z] Copying: 312/1024 [MB] (14 MBps) [2024-10-08T18:36:56.889Z] Copying: 326/1024 [MB] (14 MBps) [2024-10-08T18:36:57.829Z] Copying: 341/1024 [MB] (14 MBps) [2024-10-08T18:36:58.801Z] Copying: 355/1024 [MB] (13 MBps) [2024-10-08T18:36:59.743Z] Copying: 369/1024 [MB] (14 MBps) [2024-10-08T18:37:00.684Z] Copying: 383/1024 [MB] (14 MBps) [2024-10-08T18:37:01.637Z] Copying: 397/1024 [MB] (13 MBps) [2024-10-08T18:37:02.580Z] Copying: 411/1024 [MB] (13 MBps) [2024-10-08T18:37:03.968Z] Copying: 425/1024 [MB] (14 MBps) [2024-10-08T18:37:04.941Z] Copying: 439/1024 [MB] (14 MBps) [2024-10-08T18:37:05.884Z] Copying: 453/1024 [MB] (13 MBps) [2024-10-08T18:37:06.825Z] Copying: 466/1024 [MB] (13 MBps) [2024-10-08T18:37:07.765Z] Copying: 480/1024 [MB] (14 MBps) [2024-10-08T18:37:08.697Z] Copying: 496/1024 [MB] (15 MBps) [2024-10-08T18:37:09.636Z] Copying: 531/1024 [MB] (35 MBps) [2024-10-08T18:37:10.575Z] Copying: 563/1024 [MB] (31 MBps) [2024-10-08T18:37:11.969Z] Copying: 586/1024 [MB] (23 MBps) [2024-10-08T18:37:12.541Z] Copying: 601/1024 [MB] (14 MBps) [2024-10-08T18:37:13.923Z] Copying: 623/1024 [MB] (22 MBps) [2024-10-08T18:37:14.914Z] Copying: 642/1024 [MB] (19 MBps) [2024-10-08T18:37:15.869Z] Copying: 657/1024 [MB] (14 MBps) [2024-10-08T18:37:16.829Z] Copying: 681/1024 [MB] (23 MBps) [2024-10-08T18:37:17.795Z] Copying: 703/1024 [MB] (22 MBps) [2024-10-08T18:37:18.774Z] Copying: 724/1024 [MB] (20 MBps) [2024-10-08T18:37:19.718Z] Copying: 742/1024 [MB] (18 MBps) [2024-10-08T18:37:20.659Z] Copying: 763/1024 [MB] (20 MBps) [2024-10-08T18:37:21.640Z] Copying: 788/1024 [MB] (24 MBps) [2024-10-08T18:37:22.575Z] Copying: 818/1024 [MB] (30 MBps) [2024-10-08T18:37:23.549Z] Copying: 849/1024 [MB] (31 MBps) 
[2024-10-08T18:37:24.935Z] Copying: 883/1024 [MB] (33 MBps) [2024-10-08T18:37:25.878Z] Copying: 898/1024 [MB] (14 MBps) [2024-10-08T18:37:26.820Z] Copying: 912/1024 [MB] (14 MBps) [2024-10-08T18:37:27.761Z] Copying: 926/1024 [MB] (13 MBps) [2024-10-08T18:37:28.712Z] Copying: 940/1024 [MB] (14 MBps) [2024-10-08T18:37:29.655Z] Copying: 955/1024 [MB] (14 MBps) [2024-10-08T18:37:30.594Z] Copying: 969/1024 [MB] (14 MBps) [2024-10-08T18:37:31.979Z] Copying: 983/1024 [MB] (13 MBps) [2024-10-08T18:37:32.552Z] Copying: 997/1024 [MB] (13 MBps) [2024-10-08T18:37:33.496Z] Copying: 1011/1024 [MB] (14 MBps) [2024-10-08T18:37:33.757Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-08 18:37:33.513440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.513532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:44.907 [2024-10-08 18:37:33.513553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:44.907 [2024-10-08 18:37:33.513564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.513591] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:44.907 [2024-10-08 18:37:33.514392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.514422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:44.907 [2024-10-08 18:37:33.514445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.783 ms 00:26:44.907 [2024-10-08 18:37:33.514460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.514881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.514903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:44.907 [2024-10-08 18:37:33.514914] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:26:44.907 [2024-10-08 18:37:33.514924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.533207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.533261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:44.907 [2024-10-08 18:37:33.533291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.261 ms 00:26:44.907 [2024-10-08 18:37:33.533304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.539597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.539641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:44.907 [2024-10-08 18:37:33.539653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.261 ms 00:26:44.907 [2024-10-08 18:37:33.539663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.542929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.542978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:44.907 [2024-10-08 18:37:33.542990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.186 ms 00:26:44.907 [2024-10-08 18:37:33.542999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.548436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.548487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:44.907 [2024-10-08 18:37:33.548499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.391 ms 00:26:44.907 [2024-10-08 18:37:33.548518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 
18:37:33.553902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.554101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:44.907 [2024-10-08 18:37:33.554123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.336 ms 00:26:44.907 [2024-10-08 18:37:33.554145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.557355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.557535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:44.907 [2024-10-08 18:37:33.557555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.184 ms 00:26:44.907 [2024-10-08 18:37:33.557564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.560370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.560419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:44.907 [2024-10-08 18:37:33.560428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.765 ms 00:26:44.907 [2024-10-08 18:37:33.560436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.562586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.562767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:44.907 [2024-10-08 18:37:33.562873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:26:44.907 [2024-10-08 18:37:33.562913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.565050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.907 [2024-10-08 18:37:33.565119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL 
clean state 00:26:44.907 [2024-10-08 18:37:33.565132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.986 ms 00:26:44.907 [2024-10-08 18:37:33.565141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.907 [2024-10-08 18:37:33.565182] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:44.907 [2024-10-08 18:37:33.565199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:44.907 [2024-10-08 18:37:33.565212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:44.907 [2024-10-08 18:37:33.565222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:44.907 [2024-10-08 18:37:33.565231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:44.907 [2024-10-08 18:37:33.565240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:44.907 [2024-10-08 18:37:33.565249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:44.907 [2024-10-08 18:37:33.565258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:44.907 [2024-10-08 18:37:33.565267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:44.907 [2024-10-08 18:37:33.565276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:44.907 [2024-10-08 18:37:33.565285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565304] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565427] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 
18:37:33.565556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 
[2024-10-08 18:37:33.565682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 
00:26:44.908 [2024-10-08 18:37:33.565831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: 
free 00:26:44.908 [2024-10-08 18:37:33.565960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:44.908 [2024-10-08 18:37:33.565997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 
state: free 00:26:44.909 [2024-10-08 18:37:33.566087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:44.909 [2024-10-08 18:37:33.566138] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:44.909 [2024-10-08 18:37:33.566147] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 586781a0-ef3b-4b7a-b872-155f1a622c6f 00:26:44.909 [2024-10-08 18:37:33.566168] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:44.909 [2024-10-08 18:37:33.566181] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263616 00:26:44.909 [2024-10-08 18:37:33.566193] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261632 00:26:44.909 [2024-10-08 18:37:33.566209] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:26:44.909 [2024-10-08 18:37:33.566217] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:44.909 [2024-10-08 18:37:33.566227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:44.909 [2024-10-08 18:37:33.566236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:44.909 [2024-10-08 18:37:33.566243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:44.909 [2024-10-08 18:37:33.566251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
start: 0 00:26:44.909 [2024-10-08 18:37:33.566261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.909 [2024-10-08 18:37:33.566274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:44.909 [2024-10-08 18:37:33.566284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:26:44.909 [2024-10-08 18:37:33.566293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.568859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.909 [2024-10-08 18:37:33.568891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:44.909 [2024-10-08 18:37:33.568901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.546 ms 00:26:44.909 [2024-10-08 18:37:33.568919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.569045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:44.909 [2024-10-08 18:37:33.569055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:44.909 [2024-10-08 18:37:33.569074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:26:44.909 [2024-10-08 18:37:33.569084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.576345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.576552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:44.909 [2024-10-08 18:37:33.576581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.576594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.576662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.576672] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:44.909 [2024-10-08 18:37:33.576683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.576698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.576799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.576811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:44.909 [2024-10-08 18:37:33.576820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.576829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.576845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.576853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:44.909 [2024-10-08 18:37:33.576861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.576869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.591394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.591631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:44.909 [2024-10-08 18:37:33.591653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.591662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.603255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.603443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:44.909 [2024-10-08 18:37:33.604004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 
[2024-10-08 18:37:33.604149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.604265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.604363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:44.909 [2024-10-08 18:37:33.604393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.604448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.604519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.604543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:44.909 [2024-10-08 18:37:33.604606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.604638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.604785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.604896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:44.909 [2024-10-08 18:37:33.604964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.604994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.605146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.605254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:44.909 [2024-10-08 18:37:33.605322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.605352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.605463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.605537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:44.909 [2024-10-08 18:37:33.605598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.605627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.605732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:44.909 [2024-10-08 18:37:33.605833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:44.909 [2024-10-08 18:37:33.605898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:44.909 [2024-10-08 18:37:33.605927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:44.909 [2024-10-08 18:37:33.606140] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 92.663 ms, result 0 00:26:45.170 00:26:45.170 00:26:45.170 18:37:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:47.718 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:47.718 18:37:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:47.718 [2024-10-08 18:37:36.162658] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:26:47.718 [2024-10-08 18:37:36.163259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92800 ] 00:26:47.718 [2024-10-08 18:37:36.299859] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:47.718 [2024-10-08 18:37:36.320515] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.718 [2024-10-08 18:37:36.379502] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.718 [2024-10-08 18:37:36.534703] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:47.718 [2024-10-08 18:37:36.534913] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:47.980 [2024-10-08 18:37:36.700911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.700988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:47.980 [2024-10-08 18:37:36.701006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:47.980 [2024-10-08 18:37:36.701016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.701080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.701092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:47.980 [2024-10-08 18:37:36.701119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:47.980 [2024-10-08 18:37:36.701127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.701151] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write 
buffer cache 00:26:47.980 [2024-10-08 18:37:36.701440] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:47.980 [2024-10-08 18:37:36.701456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.701465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:47.980 [2024-10-08 18:37:36.701478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:26:47.980 [2024-10-08 18:37:36.701488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.703230] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:47.980 [2024-10-08 18:37:36.706878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.706938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:47.980 [2024-10-08 18:37:36.706951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.651 ms 00:26:47.980 [2024-10-08 18:37:36.706960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.707050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.707061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:47.980 [2024-10-08 18:37:36.707071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:47.980 [2024-10-08 18:37:36.707080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.715313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.715361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:47.980 [2024-10-08 18:37:36.715375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.189 ms 00:26:47.980 
[2024-10-08 18:37:36.715397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.715493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.715503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:47.980 [2024-10-08 18:37:36.715514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:47.980 [2024-10-08 18:37:36.715523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.715589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.715599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:47.980 [2024-10-08 18:37:36.715608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:47.980 [2024-10-08 18:37:36.715616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.715645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:47.980 [2024-10-08 18:37:36.717787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.717823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:47.980 [2024-10-08 18:37:36.717834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.149 ms 00:26:47.980 [2024-10-08 18:37:36.717842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.717879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.717887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:47.980 [2024-10-08 18:37:36.717897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:26:47.980 [2024-10-08 18:37:36.717915] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.717943] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:47.980 [2024-10-08 18:37:36.717964] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:47.980 [2024-10-08 18:37:36.718002] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:47.980 [2024-10-08 18:37:36.718023] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:47.980 [2024-10-08 18:37:36.718129] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:47.980 [2024-10-08 18:37:36.718141] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:47.980 [2024-10-08 18:37:36.718152] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:47.980 [2024-10-08 18:37:36.718165] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:47.980 [2024-10-08 18:37:36.718175] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:47.980 [2024-10-08 18:37:36.718187] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:47.980 [2024-10-08 18:37:36.718199] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:47.980 [2024-10-08 18:37:36.718207] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:47.980 [2024-10-08 18:37:36.718215] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:47.980 [2024-10-08 18:37:36.718223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 
18:37:36.718231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:47.980 [2024-10-08 18:37:36.718240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:26:47.980 [2024-10-08 18:37:36.718249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.718333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.980 [2024-10-08 18:37:36.718344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:47.980 [2024-10-08 18:37:36.718352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:47.980 [2024-10-08 18:37:36.718359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.980 [2024-10-08 18:37:36.718462] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:47.981 [2024-10-08 18:37:36.718474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:47.981 [2024-10-08 18:37:36.718484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.981 [2024-10-08 18:37:36.718493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:47.981 [2024-10-08 18:37:36.718510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:47.981 [2024-10-08 18:37:36.718528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:47.981 [2024-10-08 18:37:36.718544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.981 [2024-10-08 18:37:36.718565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 
00:26:47.981 [2024-10-08 18:37:36.718573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:47.981 [2024-10-08 18:37:36.718581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:47.981 [2024-10-08 18:37:36.718588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:47.981 [2024-10-08 18:37:36.718596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:47.981 [2024-10-08 18:37:36.718604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:47.981 [2024-10-08 18:37:36.718621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:47.981 [2024-10-08 18:37:36.718628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:47.981 [2024-10-08 18:37:36.718644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.981 [2024-10-08 18:37:36.718659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:47.981 [2024-10-08 18:37:36.718667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.981 [2024-10-08 18:37:36.718690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:47.981 [2024-10-08 18:37:36.718697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.981 [2024-10-08 18:37:36.718713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
p2l3 00:26:47.981 [2024-10-08 18:37:36.718720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:47.981 [2024-10-08 18:37:36.718736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:47.981 [2024-10-08 18:37:36.718744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.981 [2024-10-08 18:37:36.718786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:47.981 [2024-10-08 18:37:36.718794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:47.981 [2024-10-08 18:37:36.718801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:47.981 [2024-10-08 18:37:36.718809] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:47.981 [2024-10-08 18:37:36.718818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:47.981 [2024-10-08 18:37:36.718825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:47.981 [2024-10-08 18:37:36.718844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:47.981 [2024-10-08 18:37:36.718854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718862] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:47.981 [2024-10-08 18:37:36.718871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:47.981 [2024-10-08 18:37:36.718882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:47.981 [2024-10-08 18:37:36.718891] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:47.981 [2024-10-08 18:37:36.718900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:47.981 [2024-10-08 18:37:36.718908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:47.981 [2024-10-08 18:37:36.718916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:47.981 [2024-10-08 18:37:36.718924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:47.981 [2024-10-08 18:37:36.718931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:47.981 [2024-10-08 18:37:36.718939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:47.981 [2024-10-08 18:37:36.718949] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:47.981 [2024-10-08 18:37:36.718959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.981 [2024-10-08 18:37:36.718969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:47.981 [2024-10-08 18:37:36.718977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:47.981 [2024-10-08 18:37:36.718986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:47.981 [2024-10-08 18:37:36.718994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:47.981 [2024-10-08 18:37:36.719001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:47.981 [2024-10-08 18:37:36.719009] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:47.981 [2024-10-08 18:37:36.719016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:47.981 [2024-10-08 18:37:36.719023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:47.981 [2024-10-08 18:37:36.719030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:47.981 [2024-10-08 18:37:36.719037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:47.981 [2024-10-08 18:37:36.719044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:47.981 [2024-10-08 18:37:36.719052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:47.981 [2024-10-08 18:37:36.719059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:47.981 [2024-10-08 18:37:36.719067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:47.981 [2024-10-08 18:37:36.719074] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:47.981 [2024-10-08 18:37:36.719082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:47.981 [2024-10-08 18:37:36.719091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:26:47.981 [2024-10-08 18:37:36.719099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:47.981 [2024-10-08 18:37:36.719108] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:47.981 [2024-10-08 18:37:36.719118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:47.981 [2024-10-08 18:37:36.719126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.981 [2024-10-08 18:37:36.719134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:47.981 [2024-10-08 18:37:36.719142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.732 ms 00:26:47.981 [2024-10-08 18:37:36.719150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.981 [2024-10-08 18:37:36.744635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.981 [2024-10-08 18:37:36.744742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:47.981 [2024-10-08 18:37:36.744811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.431 ms 00:26:47.981 [2024-10-08 18:37:36.744834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.981 [2024-10-08 18:37:36.745055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.981 [2024-10-08 18:37:36.745077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:47.981 [2024-10-08 18:37:36.745156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:26:47.981 [2024-10-08 18:37:36.745199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.981 [2024-10-08 18:37:36.756546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:26:47.981 [2024-10-08 18:37:36.756592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:47.981 [2024-10-08 18:37:36.756605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.190 ms 00:26:47.981 [2024-10-08 18:37:36.756614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.981 [2024-10-08 18:37:36.756658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.981 [2024-10-08 18:37:36.756667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:47.981 [2024-10-08 18:37:36.756681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:47.981 [2024-10-08 18:37:36.756690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.981 [2024-10-08 18:37:36.757305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.981 [2024-10-08 18:37:36.757334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:47.981 [2024-10-08 18:37:36.757351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:26:47.981 [2024-10-08 18:37:36.757360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.981 [2024-10-08 18:37:36.757522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.981 [2024-10-08 18:37:36.757543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:47.981 [2024-10-08 18:37:36.757553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:26:47.981 [2024-10-08 18:37:36.757566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.981 [2024-10-08 18:37:36.763981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.981 [2024-10-08 18:37:36.764020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:47.981 [2024-10-08 18:37:36.764035] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.388 ms 00:26:47.981 [2024-10-08 18:37:36.764044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.767561] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:47.982 [2024-10-08 18:37:36.767602] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:47.982 [2024-10-08 18:37:36.767614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.767623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:47.982 [2024-10-08 18:37:36.767640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.472 ms 00:26:47.982 [2024-10-08 18:37:36.767648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.782943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.782998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:47.982 [2024-10-08 18:37:36.783017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.244 ms 00:26:47.982 [2024-10-08 18:37:36.783025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.785673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.785717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:47.982 [2024-10-08 18:37:36.785728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.596 ms 00:26:47.982 [2024-10-08 18:37:36.785736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.788338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.788380] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:47.982 [2024-10-08 18:37:36.788392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:26:47.982 [2024-10-08 18:37:36.788409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.788779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.788805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:47.982 [2024-10-08 18:37:36.788816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:26:47.982 [2024-10-08 18:37:36.788826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.809976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.810053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:47.982 [2024-10-08 18:37:36.810067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.123 ms 00:26:47.982 [2024-10-08 18:37:36.810076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.818205] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:47.982 [2024-10-08 18:37:36.821407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.821459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:47.982 [2024-10-08 18:37:36.821480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.273 ms 00:26:47.982 [2024-10-08 18:37:36.821490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.821574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.821586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore L2P 00:26:47.982 [2024-10-08 18:37:36.821596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:47.982 [2024-10-08 18:37:36.821605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.822420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.822461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:47.982 [2024-10-08 18:37:36.822473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:26:47.982 [2024-10-08 18:37:36.822485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.822511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.822520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:47.982 [2024-10-08 18:37:36.822530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:47.982 [2024-10-08 18:37:36.822538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.982 [2024-10-08 18:37:36.822578] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:47.982 [2024-10-08 18:37:36.822593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.982 [2024-10-08 18:37:36.822606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:47.982 [2024-10-08 18:37:36.822617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:47.982 [2024-10-08 18:37:36.822628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.245 [2024-10-08 18:37:36.828306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.245 [2024-10-08 18:37:36.828472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:48.245 [2024-10-08 
18:37:36.828491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.658 ms 00:26:48.245 [2024-10-08 18:37:36.828500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.245 [2024-10-08 18:37:36.828624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.245 [2024-10-08 18:37:36.828638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:48.245 [2024-10-08 18:37:36.828647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:48.245 [2024-10-08 18:37:36.828655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.245 [2024-10-08 18:37:36.829829] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 128.454 ms, result 0 00:26:49.187  [2024-10-08T18:37:39.423Z] Copying: 10/1024 [MB] (10 MBps) [2024-10-08T18:37:40.417Z] Copying: 23/1024 [MB] (12 MBps) [2024-10-08T18:37:41.363Z] Copying: 33504/1048576 [kB] (9444 kBps) [2024-10-08T18:37:42.304Z] Copying: 44/1024 [MB] (12 MBps) [2024-10-08T18:37:43.258Z] Copying: 55224/1048576 [kB] (9308 kBps) [2024-10-08T18:37:44.202Z] Copying: 65000/1048576 [kB] (9776 kBps) [2024-10-08T18:37:45.146Z] Copying: 74104/1048576 [kB] (9104 kBps) [2024-10-08T18:37:46.089Z] Copying: 83016/1048576 [kB] (8912 kBps) [2024-10-08T18:37:47.031Z] Copying: 92808/1048576 [kB] (9792 kBps) [2024-10-08T18:37:48.415Z] Copying: 102/1024 [MB] (11 MBps) [2024-10-08T18:37:49.356Z] Copying: 116/1024 [MB] (14 MBps) [2024-10-08T18:37:50.298Z] Copying: 136/1024 [MB] (19 MBps) [2024-10-08T18:37:51.238Z] Copying: 149460/1048576 [kB] (9760 kBps) [2024-10-08T18:37:52.231Z] Copying: 155/1024 [MB] (10 MBps) [2024-10-08T18:37:53.175Z] Copying: 168856/1048576 [kB] (9136 kBps) [2024-10-08T18:37:54.117Z] Copying: 179/1024 [MB] (14 MBps) [2024-10-08T18:37:55.062Z] Copying: 189/1024 [MB] (10 MBps) [2024-10-08T18:37:56.450Z] Copying: 203464/1048576 [kB] (9696 kBps) 
[2024-10-08T18:37:57.022Z] Copying: 212688/1048576 [kB] (9224 kBps) [2024-10-08T18:37:58.410Z] Copying: 219/1024 [MB] (11 MBps) [2024-10-08T18:37:59.353Z] Copying: 237/1024 [MB] (18 MBps) [2024-10-08T18:38:00.295Z] Copying: 251/1024 [MB] (14 MBps) [2024-10-08T18:38:01.237Z] Copying: 269/1024 [MB] (17 MBps) [2024-10-08T18:38:02.178Z] Copying: 284/1024 [MB] (15 MBps) [2024-10-08T18:38:03.119Z] Copying: 299/1024 [MB] (15 MBps) [2024-10-08T18:38:04.062Z] Copying: 314/1024 [MB] (14 MBps) [2024-10-08T18:38:05.448Z] Copying: 325/1024 [MB] (11 MBps) [2024-10-08T18:38:06.019Z] Copying: 343172/1048576 [kB] (9592 kBps) [2024-10-08T18:38:07.406Z] Copying: 352952/1048576 [kB] (9780 kBps) [2024-10-08T18:38:08.347Z] Copying: 355/1024 [MB] (10 MBps) [2024-10-08T18:38:09.332Z] Copying: 373784/1048576 [kB] (10020 kBps) [2024-10-08T18:38:10.272Z] Copying: 375/1024 [MB] (10 MBps) [2024-10-08T18:38:11.219Z] Copying: 393556/1048576 [kB] (9520 kBps) [2024-10-08T18:38:12.163Z] Copying: 403224/1048576 [kB] (9668 kBps) [2024-10-08T18:38:13.104Z] Copying: 413132/1048576 [kB] (9908 kBps) [2024-10-08T18:38:14.048Z] Copying: 413/1024 [MB] (10 MBps) [2024-10-08T18:38:15.435Z] Copying: 433560/1048576 [kB] (10072 kBps) [2024-10-08T18:38:16.378Z] Copying: 443108/1048576 [kB] (9548 kBps) [2024-10-08T18:38:17.322Z] Copying: 452604/1048576 [kB] (9496 kBps) [2024-10-08T18:38:18.284Z] Copying: 454/1024 [MB] (12 MBps) [2024-10-08T18:38:19.232Z] Copying: 475096/1048576 [kB] (9644 kBps) [2024-10-08T18:38:20.177Z] Copying: 485204/1048576 [kB] (10108 kBps) [2024-10-08T18:38:21.118Z] Copying: 495312/1048576 [kB] (10108 kBps) [2024-10-08T18:38:22.062Z] Copying: 504968/1048576 [kB] (9656 kBps) [2024-10-08T18:38:23.042Z] Copying: 514976/1048576 [kB] (10008 kBps) [2024-10-08T18:38:24.498Z] Copying: 524472/1048576 [kB] (9496 kBps) [2024-10-08T18:38:25.071Z] Copying: 534104/1048576 [kB] (9632 kBps) [2024-10-08T18:38:26.017Z] Copying: 543796/1048576 [kB] (9692 kBps) [2024-10-08T18:38:27.404Z] Copying: 553772/1048576 
[kB] (9976 kBps) [2024-10-08T18:38:28.346Z] Copying: 551/1024 [MB] (11 MBps) [2024-10-08T18:38:29.322Z] Copying: 574580/1048576 [kB] (9520 kBps) [2024-10-08T18:38:30.261Z] Copying: 571/1024 [MB] (10 MBps) [2024-10-08T18:38:31.204Z] Copying: 581/1024 [MB] (10 MBps) [2024-10-08T18:38:32.158Z] Copying: 605368/1048576 [kB] (9928 kBps) [2024-10-08T18:38:33.102Z] Copying: 601/1024 [MB] (10 MBps) [2024-10-08T18:38:34.049Z] Copying: 613/1024 [MB] (11 MBps) [2024-10-08T18:38:35.434Z] Copying: 624/1024 [MB] (10 MBps) [2024-10-08T18:38:36.377Z] Copying: 634/1024 [MB] (10 MBps) [2024-10-08T18:38:37.318Z] Copying: 645/1024 [MB] (10 MBps) [2024-10-08T18:38:38.262Z] Copying: 655/1024 [MB] (10 MBps) [2024-10-08T18:38:39.209Z] Copying: 681556/1048576 [kB] (9944 kBps) [2024-10-08T18:38:40.153Z] Copying: 677/1024 [MB] (12 MBps) [2024-10-08T18:38:41.098Z] Copying: 689/1024 [MB] (11 MBps) [2024-10-08T18:38:42.132Z] Copying: 701/1024 [MB] (12 MBps) [2024-10-08T18:38:43.080Z] Copying: 713/1024 [MB] (12 MBps) [2024-10-08T18:38:44.022Z] Copying: 725/1024 [MB] (11 MBps) [2024-10-08T18:38:45.405Z] Copying: 736/1024 [MB] (10 MBps) [2024-10-08T18:38:46.367Z] Copying: 750/1024 [MB] (13 MBps) [2024-10-08T18:38:47.306Z] Copying: 763/1024 [MB] (13 MBps) [2024-10-08T18:38:48.248Z] Copying: 777/1024 [MB] (14 MBps) [2024-10-08T18:38:49.191Z] Copying: 795/1024 [MB] (17 MBps) [2024-10-08T18:38:50.134Z] Copying: 817/1024 [MB] (22 MBps) [2024-10-08T18:38:51.085Z] Copying: 833/1024 [MB] (15 MBps) [2024-10-08T18:38:52.028Z] Copying: 848/1024 [MB] (15 MBps) [2024-10-08T18:38:53.423Z] Copying: 862/1024 [MB] (14 MBps) [2024-10-08T18:38:54.387Z] Copying: 882/1024 [MB] (19 MBps) [2024-10-08T18:38:55.330Z] Copying: 899/1024 [MB] (17 MBps) [2024-10-08T18:38:56.326Z] Copying: 915/1024 [MB] (16 MBps) [2024-10-08T18:38:57.265Z] Copying: 928/1024 [MB] (12 MBps) [2024-10-08T18:38:58.206Z] Copying: 941/1024 [MB] (13 MBps) [2024-10-08T18:38:59.172Z] Copying: 953/1024 [MB] (12 MBps) [2024-10-08T18:39:00.115Z] Copying: 
972/1024 [MB] (18 MBps) [2024-10-08T18:39:01.059Z] Copying: 986/1024 [MB] (14 MBps) [2024-10-08T18:39:02.447Z] Copying: 1004/1024 [MB] (17 MBps) [2024-10-08T18:39:02.710Z] Copying: 1017/1024 [MB] (13 MBps) [2024-10-08T18:39:02.972Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-10-08 18:39:02.769453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.769510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:14.122 [2024-10-08 18:39:02.769524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:14.122 [2024-10-08 18:39:02.769532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.769558] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:14.122 [2024-10-08 18:39:02.770019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.770037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:14.122 [2024-10-08 18:39:02.770046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:28:14.122 [2024-10-08 18:39:02.770054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.770270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.770284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:14.122 [2024-10-08 18:39:02.770292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:28:14.122 [2024-10-08 18:39:02.770300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.774080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.774105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:14.122 
[2024-10-08 18:39:02.774114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.764 ms 00:28:14.122 [2024-10-08 18:39:02.774128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.780808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.780837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:14.122 [2024-10-08 18:39:02.780847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.663 ms 00:28:14.122 [2024-10-08 18:39:02.780857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.783103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.783472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:14.122 [2024-10-08 18:39:02.783487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.181 ms 00:28:14.122 [2024-10-08 18:39:02.783494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.788114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.788147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:14.122 [2024-10-08 18:39:02.788156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.587 ms 00:28:14.122 [2024-10-08 18:39:02.788163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.791043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.791075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:14.122 [2024-10-08 18:39:02.791084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:28:14.122 [2024-10-08 18:39:02.791100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.793275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.793375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:14.122 [2024-10-08 18:39:02.793430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.159 ms 00:28:14.122 [2024-10-08 18:39:02.793452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.795806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.795941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:14.122 [2024-10-08 18:39:02.796006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.988 ms 00:28:14.122 [2024-10-08 18:39:02.796029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.797420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.797544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:14.122 [2024-10-08 18:39:02.797595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:28:14.122 [2024-10-08 18:39:02.797616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.799347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.122 [2024-10-08 18:39:02.799441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:14.122 [2024-10-08 18:39:02.799489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.669 ms 00:28:14.122 [2024-10-08 18:39:02.799510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.122 [2024-10-08 18:39:02.799545] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:14.122 [2024-10-08 18:39:02.799605] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:14.122 [2024-10-08 18:39:02.799640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:14.122 [2024-10-08 18:39:02.799690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.799810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.799843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.799900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.799980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 
18:39:02.800355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:14.122 [2024-10-08 18:39:02.800850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.800878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.800907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 
[2024-10-08 18:39:02.800956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.800986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
00:28:14.123 [2024-10-08 18:39:02.801173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: 
free 00:28:14.123 [2024-10-08 18:39:02.801286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:28:14.123 [2024-10-08 18:39:02.801392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 
0 state: free 00:28:14.123 [2024-10-08 18:39:02.801501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:14.123 [2024-10-08 18:39:02.801628] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:14.123 [2024-10-08 18:39:02.801644] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 586781a0-ef3b-4b7a-b872-155f1a622c6f 00:28:14.123 [2024-10-08 18:39:02.801652] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:14.123 [2024-10-08 18:39:02.801659] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:14.123 [2024-10-08 18:39:02.801666] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:14.123 [2024-10-08 18:39:02.801677] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:14.123 [2024-10-08 18:39:02.801684] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:14.123 [2024-10-08 18:39:02.801692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:14.123 [2024-10-08 18:39:02.801699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:14.123 [2024-10-08 18:39:02.801705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:14.123 [2024-10-08 18:39:02.801711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:14.123 [2024-10-08 18:39:02.801719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.123 [2024-10-08 18:39:02.801727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:14.123 [2024-10-08 18:39:02.801740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.175 ms 00:28:14.123 [2024-10-08 18:39:02.801747] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:14.123 [2024-10-08 18:39:02.803272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.123 [2024-10-08 18:39:02.803358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:14.123 [2024-10-08 18:39:02.803403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.360 ms 00:28:14.123 [2024-10-08 18:39:02.803443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.123 [2024-10-08 18:39:02.803564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:14.123 [2024-10-08 18:39:02.803610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:14.124 [2024-10-08 18:39:02.803644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:14.124 [2024-10-08 18:39:02.803662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.807979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.808082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:14.124 [2024-10-08 18:39:02.808130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.808159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.808249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.808292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:14.124 [2024-10-08 18:39:02.808313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.808332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.808384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.808407] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:14.124 [2024-10-08 18:39:02.808462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.808484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.808511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.808558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:14.124 [2024-10-08 18:39:02.808580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.808589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.817133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.817262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:14.124 [2024-10-08 18:39:02.817314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.817366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.824136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.824260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:14.124 [2024-10-08 18:39:02.824311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.824336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.824438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.824467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:14.124 [2024-10-08 18:39:02.824512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.824582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.824629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.824676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:14.124 [2024-10-08 18:39:02.824732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.824771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.824874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.824904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:14.124 [2024-10-08 18:39:02.824927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.824949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.825037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.825065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:14.124 [2024-10-08 18:39:02.825088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.825114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.825202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.825230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:14.124 [2024-10-08 18:39:02.825253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.825274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.825388] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:14.124 [2024-10-08 18:39:02.825418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:14.124 [2024-10-08 18:39:02.825470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:14.124 [2024-10-08 18:39:02.825499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:14.124 [2024-10-08 18:39:02.825628] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.149 ms, result 0 00:28:14.386 00:28:14.386 00:28:14.386 18:39:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:16.931 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:28:16.931 Process with pid 89813 is not found 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 89813 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89813 ']' 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill 
-0 89813 00:28:16.931 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (89813) - No such process 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 89813 is not found' 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:28:16.931 Remove shared memory files 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:16.931 ************************************ 00:28:16.931 END TEST ftl_dirty_shutdown 00:28:16.931 ************************************ 00:28:16.931 00:28:16.931 real 6m15.401s 00:28:16.931 user 7m20.164s 00:28:16.931 sys 0m37.523s 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:16.931 18:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:16.931 18:39:05 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:16.931 18:39:05 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:28:16.931 18:39:05 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.931 18:39:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:17.193 ************************************ 00:28:17.193 START TEST ftl_upgrade_shutdown 00:28:17.193 ************************************ 00:28:17.193 18:39:05 
ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:28:17.193 * Looking for test storage... 00:28:17.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:28:17.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.193 --rc genhtml_branch_coverage=1 00:28:17.193 --rc genhtml_function_coverage=1 00:28:17.193 --rc genhtml_legend=1 00:28:17.193 --rc geninfo_all_blocks=1 00:28:17.193 --rc geninfo_unexecuted_blocks=1 00:28:17.193 00:28:17.193 ' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:28:17.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.193 
--rc genhtml_branch_coverage=1 00:28:17.193 --rc genhtml_function_coverage=1 00:28:17.193 --rc genhtml_legend=1 00:28:17.193 --rc geninfo_all_blocks=1 00:28:17.193 --rc geninfo_unexecuted_blocks=1 00:28:17.193 00:28:17.193 ' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:28:17.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.193 --rc genhtml_branch_coverage=1 00:28:17.193 --rc genhtml_function_coverage=1 00:28:17.193 --rc genhtml_legend=1 00:28:17.193 --rc geninfo_all_blocks=1 00:28:17.193 --rc geninfo_unexecuted_blocks=1 00:28:17.193 00:28:17.193 ' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:28:17.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:17.193 --rc genhtml_branch_coverage=1 00:28:17.193 --rc genhtml_function_coverage=1 00:28:17.193 --rc genhtml_legend=1 00:28:17.193 --rc geninfo_all_blocks=1 00:28:17.193 --rc geninfo_unexecuted_blocks=1 00:28:17.193 00:28:17.193 ' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown 
-- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:28:17.193 18:39:05 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:28:17.193 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=93787 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 93787 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93787 ']' 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:17.194 18:39:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:17.194 [2024-10-08 18:39:06.032485] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:28:17.194 [2024-10-08 18:39:06.032768] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93787 ] 00:28:17.454 [2024-10-08 18:39:06.163611] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:17.454 [2024-10-08 18:39:06.181383] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.454 [2024-10-08 18:39:06.218706] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:18.025 18:39:06 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:28:18.025 18:39:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:28:18.595 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:28:18.595 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:28:18.595 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:28:18.595 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:28:18.595 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:18.595 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:18.595 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:18.595 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:28:18.595 18:39:07 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:18.595 { 00:28:18.595 "name": "basen1", 00:28:18.595 "aliases": [ 00:28:18.595 "7c258f7f-a584-4bd9-a208-28aa9b95787d" 00:28:18.595 ], 00:28:18.595 "product_name": "NVMe disk", 00:28:18.595 "block_size": 4096, 00:28:18.595 "num_blocks": 1310720, 00:28:18.595 "uuid": "7c258f7f-a584-4bd9-a208-28aa9b95787d", 00:28:18.595 "numa_id": -1, 00:28:18.595 "assigned_rate_limits": { 00:28:18.595 "rw_ios_per_sec": 0, 00:28:18.595 "rw_mbytes_per_sec": 0, 00:28:18.595 "r_mbytes_per_sec": 0, 00:28:18.595 "w_mbytes_per_sec": 0 00:28:18.595 }, 00:28:18.596 "claimed": true, 00:28:18.596 "claim_type": "read_many_write_one", 00:28:18.596 "zoned": false, 00:28:18.596 "supported_io_types": { 00:28:18.596 "read": true, 00:28:18.596 "write": true, 00:28:18.596 "unmap": true, 00:28:18.596 "flush": true, 00:28:18.596 "reset": true, 00:28:18.596 "nvme_admin": true, 00:28:18.596 "nvme_io": true, 00:28:18.596 "nvme_io_md": false, 00:28:18.596 "write_zeroes": true, 00:28:18.596 "zcopy": false, 00:28:18.596 "get_zone_info": false, 00:28:18.596 "zone_management": false, 00:28:18.596 "zone_append": false, 00:28:18.596 "compare": true, 00:28:18.596 "compare_and_write": false, 00:28:18.596 "abort": true, 00:28:18.596 "seek_hole": false, 00:28:18.596 "seek_data": false, 00:28:18.596 "copy": true, 00:28:18.596 "nvme_iov_md": false 00:28:18.596 }, 00:28:18.596 "driver_specific": { 00:28:18.596 "nvme": [ 00:28:18.596 { 00:28:18.596 "pci_address": "0000:00:11.0", 00:28:18.596 "trid": { 00:28:18.596 "trtype": "PCIe", 00:28:18.596 "traddr": "0000:00:11.0" 00:28:18.596 }, 00:28:18.596 "ctrlr_data": { 00:28:18.596 "cntlid": 0, 00:28:18.596 "vendor_id": "0x1b36", 00:28:18.596 "model_number": "QEMU NVMe Ctrl", 00:28:18.596 "serial_number": "12341", 00:28:18.596 "firmware_revision": "8.0.0", 00:28:18.596 "subnqn": "nqn.2019-08.org.qemu:12341", 00:28:18.596 "oacs": { 00:28:18.596 "security": 0, 00:28:18.596 "format": 1, 00:28:18.596 "firmware": 0, 00:28:18.596 
"ns_manage": 1 00:28:18.596 }, 00:28:18.596 "multi_ctrlr": false, 00:28:18.596 "ana_reporting": false 00:28:18.596 }, 00:28:18.596 "vs": { 00:28:18.596 "nvme_version": "1.4" 00:28:18.596 }, 00:28:18.596 "ns_data": { 00:28:18.596 "id": 1, 00:28:18.596 "can_share": false 00:28:18.596 } 00:28:18.596 } 00:28:18.596 ], 00:28:18.596 "mp_policy": "active_passive" 00:28:18.596 } 00:28:18.596 } 00:28:18.596 ]' 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:18.596 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:18.857 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=08a96236-a329-4295-b9ef-287915ce96d5 00:28:18.857 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:28:18.857 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08a96236-a329-4295-b9ef-287915ce96d5 00:28:19.144 18:39:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1ff921e5-605b-4f63-af17-7bc37c28b86a 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1ff921e5-605b-4f63-af17-7bc37c28b86a 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5ca248ab-f0f8-455c-8ab9-40f33d39adc2 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5ca248ab-f0f8-455c-8ab9-40f33d39adc2 ]] 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5ca248ab-f0f8-455c-8ab9-40f33d39adc2 5120 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5ca248ab-f0f8-455c-8ab9-40f33d39adc2 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5ca248ab-f0f8-455c-8ab9-40f33d39adc2 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=5ca248ab-f0f8-455c-8ab9-40f33d39adc2 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:28:19.408 18:39:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5ca248ab-f0f8-455c-8ab9-40f33d39adc2 00:28:19.671 18:39:08 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@1382 -- # bdev_info='[ 00:28:19.671 { 00:28:19.671 "name": "5ca248ab-f0f8-455c-8ab9-40f33d39adc2", 00:28:19.671 "aliases": [ 00:28:19.671 "lvs/basen1p0" 00:28:19.671 ], 00:28:19.671 "product_name": "Logical Volume", 00:28:19.671 "block_size": 4096, 00:28:19.671 "num_blocks": 5242880, 00:28:19.671 "uuid": "5ca248ab-f0f8-455c-8ab9-40f33d39adc2", 00:28:19.671 "assigned_rate_limits": { 00:28:19.671 "rw_ios_per_sec": 0, 00:28:19.671 "rw_mbytes_per_sec": 0, 00:28:19.671 "r_mbytes_per_sec": 0, 00:28:19.671 "w_mbytes_per_sec": 0 00:28:19.671 }, 00:28:19.671 "claimed": false, 00:28:19.671 "zoned": false, 00:28:19.671 "supported_io_types": { 00:28:19.671 "read": true, 00:28:19.671 "write": true, 00:28:19.671 "unmap": true, 00:28:19.672 "flush": false, 00:28:19.672 "reset": true, 00:28:19.672 "nvme_admin": false, 00:28:19.672 "nvme_io": false, 00:28:19.672 "nvme_io_md": false, 00:28:19.672 "write_zeroes": true, 00:28:19.672 "zcopy": false, 00:28:19.672 "get_zone_info": false, 00:28:19.672 "zone_management": false, 00:28:19.672 "zone_append": false, 00:28:19.672 "compare": false, 00:28:19.672 "compare_and_write": false, 00:28:19.672 "abort": false, 00:28:19.672 "seek_hole": true, 00:28:19.672 "seek_data": true, 00:28:19.672 "copy": false, 00:28:19.672 "nvme_iov_md": false 00:28:19.672 }, 00:28:19.672 "driver_specific": { 00:28:19.672 "lvol": { 00:28:19.672 "lvol_store_uuid": "1ff921e5-605b-4f63-af17-7bc37c28b86a", 00:28:19.672 "base_bdev": "basen1", 00:28:19.672 "thin_provision": true, 00:28:19.672 "num_allocated_clusters": 0, 00:28:19.672 "snapshot": false, 00:28:19.672 "clone": false, 00:28:19.672 "esnap_clone": false 00:28:19.672 } 00:28:19.672 } 00:28:19.672 } 00:28:19.672 ]' 00:28:19.672 18:39:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:28:19.672 18:39:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:28:19.672 18:39:08 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:28:19.672 18:39:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:28:19.672 18:39:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:28:19.672 18:39:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:28:19.672 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:28:19.672 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:28:19.672 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:28:19.933 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:28:19.933 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:28:19.933 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:28:20.194 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:28:20.194 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:28:20.194 18:39:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5ca248ab-f0f8-455c-8ab9-40f33d39adc2 -c cachen1p0 --l2p_dram_limit 2 00:28:20.455 [2024-10-08 18:39:09.162137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.455 [2024-10-08 18:39:09.162339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:20.455 [2024-10-08 18:39:09.162363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:20.455 [2024-10-08 18:39:09.162377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.455 [2024-10-08 18:39:09.162442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:20.455 [2024-10-08 18:39:09.162452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:20.455 [2024-10-08 18:39:09.162465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:20.455 [2024-10-08 18:39:09.162475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.455 [2024-10-08 18:39:09.162497] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:20.455 [2024-10-08 18:39:09.162825] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:20.455 [2024-10-08 18:39:09.162849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.455 [2024-10-08 18:39:09.162863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:20.455 [2024-10-08 18:39:09.162874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.358 ms 00:28:20.455 [2024-10-08 18:39:09.162883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.455 [2024-10-08 18:39:09.162951] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d9582dc7-347d-423d-bccf-d9158cc2fec8 00:28:20.455 [2024-10-08 18:39:09.164141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.455 [2024-10-08 18:39:09.164184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:28:20.455 [2024-10-08 18:39:09.164197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:20.455 [2024-10-08 18:39:09.164210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.455 [2024-10-08 18:39:09.169691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.455 [2024-10-08 18:39:09.169860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:20.455 [2024-10-08 18:39:09.169876] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.434 ms 00:28:20.455 [2024-10-08 18:39:09.169888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.455 [2024-10-08 18:39:09.169927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.455 [2024-10-08 18:39:09.169938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:20.455 [2024-10-08 18:39:09.169952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:20.455 [2024-10-08 18:39:09.169961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.455 [2024-10-08 18:39:09.170009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.455 [2024-10-08 18:39:09.170022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:20.455 [2024-10-08 18:39:09.170030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:20.455 [2024-10-08 18:39:09.170040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.455 [2024-10-08 18:39:09.170062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:20.455 [2024-10-08 18:39:09.171630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.455 [2024-10-08 18:39:09.171657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:20.455 [2024-10-08 18:39:09.171675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.571 ms 00:28:20.455 [2024-10-08 18:39:09.171688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.455 [2024-10-08 18:39:09.171727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.455 [2024-10-08 18:39:09.171738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:20.455 [2024-10-08 18:39:09.171762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:20.455 
[2024-10-08 18:39:09.171771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.455 [2024-10-08 18:39:09.171796] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:28:20.455 [2024-10-08 18:39:09.171936] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:20.455 [2024-10-08 18:39:09.171956] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:20.455 [2024-10-08 18:39:09.171968] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:20.455 [2024-10-08 18:39:09.171980] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:20.455 [2024-10-08 18:39:09.171989] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:20.455 [2024-10-08 18:39:09.172000] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:20.456 [2024-10-08 18:39:09.172011] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:20.456 [2024-10-08 18:39:09.172029] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:20.456 [2024-10-08 18:39:09.172045] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:20.456 [2024-10-08 18:39:09.172062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.456 [2024-10-08 18:39:09.172070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:20.456 [2024-10-08 18:39:09.172083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms 00:28:20.456 [2024-10-08 18:39:09.172090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.456 [2024-10-08 18:39:09.172177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.456 
[2024-10-08 18:39:09.172186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:20.456 [2024-10-08 18:39:09.172195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:28:20.456 [2024-10-08 18:39:09.172202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.456 [2024-10-08 18:39:09.172302] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:20.456 [2024-10-08 18:39:09.172318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:20.456 [2024-10-08 18:39:09.172334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:20.456 [2024-10-08 18:39:09.172347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:20.456 [2024-10-08 18:39:09.172384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:20.456 [2024-10-08 18:39:09.172411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:20.456 [2024-10-08 18:39:09.172426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:20.456 [2024-10-08 18:39:09.172438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:20.456 [2024-10-08 18:39:09.172456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:20.456 [2024-10-08 18:39:09.172467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:20.456 [2024-10-08 18:39:09.172486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:20.456 
[2024-10-08 18:39:09.172494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:20.456 [2024-10-08 18:39:09.172511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:20.456 [2024-10-08 18:39:09.172520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:20.456 [2024-10-08 18:39:09.172539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:20.456 [2024-10-08 18:39:09.172547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:20.456 [2024-10-08 18:39:09.172556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:20.456 [2024-10-08 18:39:09.172564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:20.456 [2024-10-08 18:39:09.172574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:20.456 [2024-10-08 18:39:09.172583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:20.456 [2024-10-08 18:39:09.172593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:20.456 [2024-10-08 18:39:09.172601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:20.456 [2024-10-08 18:39:09.172615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:20.456 [2024-10-08 18:39:09.172624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:20.456 [2024-10-08 18:39:09.172634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:20.456 [2024-10-08 18:39:09.172641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:20.456 [2024-10-08 18:39:09.172651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:20.456 [2024-10-08 
18:39:09.172659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:20.456 [2024-10-08 18:39:09.172676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:20.456 [2024-10-08 18:39:09.172686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:20.456 [2024-10-08 18:39:09.172703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:20.456 [2024-10-08 18:39:09.172728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:20.456 [2024-10-08 18:39:09.172737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172745] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:20.456 [2024-10-08 18:39:09.172776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:20.456 [2024-10-08 18:39:09.172786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:20.456 [2024-10-08 18:39:09.172798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:20.456 [2024-10-08 18:39:09.172808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:20.456 [2024-10-08 18:39:09.172818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:20.456 [2024-10-08 18:39:09.172826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:20.456 [2024-10-08 18:39:09.172837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 
00:28:20.456 [2024-10-08 18:39:09.172846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:20.456 [2024-10-08 18:39:09.172856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:20.456 [2024-10-08 18:39:09.172867] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:20.456 [2024-10-08 18:39:09.172881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.172891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:20.456 [2024-10-08 18:39:09.172901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.172909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.172918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:20.456 [2024-10-08 18:39:09.172927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:20.456 [2024-10-08 18:39:09.172939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:20.456 [2024-10-08 18:39:09.172948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:20.456 [2024-10-08 18:39:09.172958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.172967] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.172976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.172984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.172994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.173003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.173013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:20.456 [2024-10-08 18:39:09.173021] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:20.456 [2024-10-08 18:39:09.173034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.173043] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:20.456 [2024-10-08 18:39:09.173053] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:20.456 [2024-10-08 18:39:09.173061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:20.456 [2024-10-08 18:39:09.173071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 
blk_sz:0x7fee0 00:28:20.456 [2024-10-08 18:39:09.173080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:20.456 [2024-10-08 18:39:09.173091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:20.456 [2024-10-08 18:39:09.173100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:28:20.456 [2024-10-08 18:39:09.173112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:20.456 [2024-10-08 18:39:09.173154] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:20.456 [2024-10-08 18:39:09.173166] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:23.755 [2024-10-08 18:39:11.999265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:11.999328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:23.755 [2024-10-08 18:39:11.999351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2826.098 ms 00:28:23.755 [2024-10-08 18:39:11.999361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.007794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.007841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:23.755 [2024-10-08 18:39:12.007854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.355 ms 00:28:23.755 [2024-10-08 18:39:12.007867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.007941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.007954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:23.755 [2024-10-08 18:39:12.007965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 
00:28:23.755 [2024-10-08 18:39:12.007974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.016058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.016103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:23.755 [2024-10-08 18:39:12.016113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.037 ms 00:28:23.755 [2024-10-08 18:39:12.016127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.016158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.016171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:23.755 [2024-10-08 18:39:12.016185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:23.755 [2024-10-08 18:39:12.016193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.016525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.016545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:23.755 [2024-10-08 18:39:12.016553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:28:23.755 [2024-10-08 18:39:12.016565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.016603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.016613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:23.755 [2024-10-08 18:39:12.016625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:28:23.755 [2024-10-08 18:39:12.016634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.032560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:28:23.755 [2024-10-08 18:39:12.032627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:23.755 [2024-10-08 18:39:12.032648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.904 ms 00:28:23.755 [2024-10-08 18:39:12.032665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.044635] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:23.755 [2024-10-08 18:39:12.045512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.045542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:23.755 [2024-10-08 18:39:12.045554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.669 ms 00:28:23.755 [2024-10-08 18:39:12.045562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.057780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.057817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:28:23.755 [2024-10-08 18:39:12.057832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.188 ms 00:28:23.755 [2024-10-08 18:39:12.057843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.057913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.057923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:23.755 [2024-10-08 18:39:12.057933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:28:23.755 [2024-10-08 18:39:12.057941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.060505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.060537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Save initial band info metadata 00:28:23.755 [2024-10-08 18:39:12.060549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.529 ms 00:28:23.755 [2024-10-08 18:39:12.060557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.755 [2024-10-08 18:39:12.063079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.755 [2024-10-08 18:39:12.063109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:28:23.756 [2024-10-08 18:39:12.063122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.495 ms 00:28:23.756 [2024-10-08 18:39:12.063130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.756 [2024-10-08 18:39:12.063539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.756 [2024-10-08 18:39:12.063560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:23.756 [2024-10-08 18:39:12.063574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.387 ms 00:28:23.756 [2024-10-08 18:39:12.063583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.756 [2024-10-08 18:39:12.092123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.756 [2024-10-08 18:39:12.092202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:28:23.756 [2024-10-08 18:39:12.092233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.507 ms 00:28:23.756 [2024-10-08 18:39:12.092253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.756 [2024-10-08 18:39:12.097089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.756 [2024-10-08 18:39:12.097355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:28:23.756 [2024-10-08 18:39:12.097407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.703 ms 00:28:23.756 [2024-10-08 
18:39:12.097425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.756 [2024-10-08 18:39:12.101813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.756 [2024-10-08 18:39:12.101955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:28:23.756 [2024-10-08 18:39:12.102043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.288 ms 00:28:23.756 [2024-10-08 18:39:12.102087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.756 [2024-10-08 18:39:12.107167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.756 [2024-10-08 18:39:12.107312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:23.756 [2024-10-08 18:39:12.107414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.973 ms 00:28:23.756 [2024-10-08 18:39:12.107451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.756 [2024-10-08 18:39:12.107557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.756 [2024-10-08 18:39:12.107643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:23.756 [2024-10-08 18:39:12.107719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:23.756 [2024-10-08 18:39:12.107784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.756 [2024-10-08 18:39:12.107947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.756 [2024-10-08 18:39:12.108001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:23.756 [2024-10-08 18:39:12.108089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:28:23.756 [2024-10-08 18:39:12.108169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.756 [2024-10-08 18:39:12.109422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL 
startup', duration = 2946.710 ms, result 0 00:28:23.756 { 00:28:23.756 "name": "ftl", 00:28:23.756 "uuid": "d9582dc7-347d-423d-bccf-d9158cc2fec8" 00:28:23.756 } 00:28:23.756 18:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:28:23.756 [2024-10-08 18:39:12.326288] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.756 18:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:28:23.756 18:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:28:24.017 [2024-10-08 18:39:12.726694] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:24.017 18:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:28:24.278 [2024-10-08 18:39:12.923036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:24.278 18:39:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:28:24.539 Fill FTL, iteration 1 00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 
00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:24.539 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=93898 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 93898 /var/tmp/spdk.tgt.sock 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 93898 ']' 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:24.540 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.tgt.sock... 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:24.540 18:39:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:24.540 [2024-10-08 18:39:13.347944] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:28:24.540 [2024-10-08 18:39:13.348083] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93898 ] 00:28:24.800 [2024-10-08 18:39:13.478202] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:24.800 [2024-10-08 18:39:13.491128] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.800 [2024-10-08 18:39:13.535533] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.370 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:25.370 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:28:25.370 18:39:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:28:25.630 ftln1 00:28:25.630 18:39:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:28:25.630 18:39:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 93898 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93898 ']' 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93898 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93898 00:28:25.889 killing process with pid 93898 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with 
pid 93898' 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 93898 00:28:25.889 18:39:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93898 00:28:26.460 18:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:28:26.460 18:39:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:28:26.460 [2024-10-08 18:39:15.100486] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:28:26.460 [2024-10-08 18:39:15.100628] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93934 ] 00:28:26.460 [2024-10-08 18:39:15.230333] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:26.460 [2024-10-08 18:39:15.248781] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.460 [2024-10-08 18:39:15.292614] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.844  [2024-10-08T18:39:17.680Z] Copying: 193/1024 [MB] (193 MBps) [2024-10-08T18:39:18.612Z] Copying: 415/1024 [MB] (222 MBps) [2024-10-08T18:39:19.583Z] Copying: 685/1024 [MB] (270 MBps) [2024-10-08T18:39:20.154Z] Copying: 950/1024 [MB] (265 MBps) [2024-10-08T18:39:20.154Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:28:31.304 00:28:31.304 Calculate MD5 checksum, iteration 1 00:28:31.304 18:39:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:28:31.304 18:39:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:28:31.304 18:39:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:31.304 18:39:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:31.304 18:39:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:31.304 18:39:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:31.304 18:39:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:31.304 18:39:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:31.304 [2024-10-08 18:39:20.128933] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:28:31.305 [2024-10-08 18:39:20.129054] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93989 ] 00:28:31.565 [2024-10-08 18:39:20.254530] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:31.565 [2024-10-08 18:39:20.274893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.565 [2024-10-08 18:39:20.309448] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.946  [2024-10-08T18:39:22.053Z] Copying: 639/1024 [MB] (639 MBps) [2024-10-08T18:39:22.364Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:28:33.514 00:28:33.514 18:39:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:28:33.514 18:39:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fda29888a9ca0130ef4ce54320f394c8 00:28:36.058 Fill FTL, iteration 2 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 
'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:36.058 18:39:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:28:36.058 [2024-10-08 18:39:24.474551] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:28:36.058 [2024-10-08 18:39:24.474902] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94040 ] 00:28:36.058 [2024-10-08 18:39:24.605738] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:28:36.058 [2024-10-08 18:39:24.626738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.058 [2024-10-08 18:39:24.661941] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.001  [2024-10-08T18:39:27.230Z] Copying: 171/1024 [MB] (171 MBps) [2024-10-08T18:39:28.175Z] Copying: 376/1024 [MB] (205 MBps) [2024-10-08T18:39:29.124Z] Copying: 639/1024 [MB] (263 MBps) [2024-10-08T18:39:29.693Z] Copying: 906/1024 [MB] (267 MBps) [2024-10-08T18:39:29.693Z] Copying: 1024/1024 [MB] (average 224 MBps) 00:28:40.843 00:28:40.843 18:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:28:40.843 Calculate MD5 checksum, iteration 2 00:28:40.843 18:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:28:40.843 18:39:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:40.843 18:39:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:40.843 18:39:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:40.843 18:39:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:40.843 18:39:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:40.843 18:39:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:40.843 [2024-10-08 18:39:29.665879] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:28:40.843 [2024-10-08 18:39:29.666157] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94094 ] 00:28:41.104 [2024-10-08 18:39:29.794801] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:28:41.104 [2024-10-08 18:39:29.815988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.104 [2024-10-08 18:39:29.850787] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.483  [2024-10-08T18:39:31.901Z] Copying: 690/1024 [MB] (690 MBps) [2024-10-08T18:39:32.472Z] Copying: 1024/1024 [MB] (average 671 MBps) 00:28:43.622 00:28:43.622 18:39:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:28:43.622 18:39:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:46.155 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:28:46.155 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4802f191ba18dbb468e698c73a274a51 00:28:46.155 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:28:46.155 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:28:46.155 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:46.155 [2024-10-08 18:39:34.645441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.155 [2024-10-08 18:39:34.645613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:46.155 [2024-10-08 18:39:34.645633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.010 ms 00:28:46.155 [2024-10-08 18:39:34.645642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.155 [2024-10-08 18:39:34.645672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.155 [2024-10-08 18:39:34.645682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:46.155 [2024-10-08 18:39:34.645694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:46.155 [2024-10-08 18:39:34.645703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.155 [2024-10-08 18:39:34.645722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.155 [2024-10-08 18:39:34.645730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:46.155 [2024-10-08 18:39:34.645738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:46.155 [2024-10-08 18:39:34.645746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.155 [2024-10-08 18:39:34.645829] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.372 ms, result 0 00:28:46.155 true 00:28:46.155 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:46.155 { 00:28:46.155 "name": "ftl", 00:28:46.155 "properties": [ 00:28:46.155 { 00:28:46.155 "name": "superblock_version", 00:28:46.155 "value": 5, 00:28:46.155 "read-only": true 00:28:46.155 }, 00:28:46.155 { 00:28:46.155 "name": "base_device", 00:28:46.155 "bands": [ 00:28:46.155 { 00:28:46.155 "id": 0, 00:28:46.155 "state": "FREE", 00:28:46.155 "validity": 0.0 00:28:46.155 }, 00:28:46.155 { 00:28:46.155 "id": 1, 00:28:46.155 "state": "FREE", 00:28:46.155 "validity": 0.0 00:28:46.155 }, 00:28:46.155 { 00:28:46.155 "id": 2, 00:28:46.155 "state": "FREE", 00:28:46.155 "validity": 0.0 00:28:46.155 }, 00:28:46.155 { 
00:28:46.155 "id": 3, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 4, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 5, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 6, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 7, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 8, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 9, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 10, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 11, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 12, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 13, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 14, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 15, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 16, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 17, 00:28:46.156 "state": "FREE", 00:28:46.156 "validity": 0.0 00:28:46.156 } 00:28:46.156 ], 00:28:46.156 "read-only": true 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "name": "cache_device", 00:28:46.156 "type": "bdev", 00:28:46.156 "chunks": [ 00:28:46.156 { 00:28:46.156 "id": 0, 00:28:46.156 "state": "INACTIVE", 00:28:46.156 "utilization": 0.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 1, 
00:28:46.156 "state": "CLOSED", 00:28:46.156 "utilization": 1.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 2, 00:28:46.156 "state": "CLOSED", 00:28:46.156 "utilization": 1.0 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 3, 00:28:46.156 "state": "OPEN", 00:28:46.156 "utilization": 0.001953125 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "id": 4, 00:28:46.156 "state": "OPEN", 00:28:46.156 "utilization": 0.0 00:28:46.156 } 00:28:46.156 ], 00:28:46.156 "read-only": true 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "name": "verbose_mode", 00:28:46.156 "value": true, 00:28:46.156 "unit": "", 00:28:46.156 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:46.156 }, 00:28:46.156 { 00:28:46.156 "name": "prep_upgrade_on_shutdown", 00:28:46.156 "value": false, 00:28:46.156 "unit": "", 00:28:46.156 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:46.156 } 00:28:46.156 ] 00:28:46.156 } 00:28:46.156 18:39:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:46.414 [2024-10-08 18:39:35.065407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.414 [2024-10-08 18:39:35.065456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:46.414 [2024-10-08 18:39:35.065469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:46.414 [2024-10-08 18:39:35.065476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.414 [2024-10-08 18:39:35.065499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.414 [2024-10-08 18:39:35.065506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:46.414 [2024-10-08 18:39:35.065514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:46.414 [2024-10-08 
18:39:35.065521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.414 [2024-10-08 18:39:35.065540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.414 [2024-10-08 18:39:35.065548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:46.414 [2024-10-08 18:39:35.065555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:46.414 [2024-10-08 18:39:35.065562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.414 [2024-10-08 18:39:35.065615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.202 ms, result 0 00:28:46.414 true 00:28:46.414 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:28:46.414 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:46.414 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:46.414 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:28:46.414 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:28:46.414 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:46.671 [2024-10-08 18:39:35.441854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.671 [2024-10-08 18:39:35.441903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:46.671 [2024-10-08 18:39:35.441915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:46.671 [2024-10-08 18:39:35.441923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.671 
[2024-10-08 18:39:35.441944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.671 [2024-10-08 18:39:35.441953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:46.671 [2024-10-08 18:39:35.441961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:46.671 [2024-10-08 18:39:35.441968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.671 [2024-10-08 18:39:35.441987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:46.671 [2024-10-08 18:39:35.441995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:46.671 [2024-10-08 18:39:35.442002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:46.671 [2024-10-08 18:39:35.442008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:46.671 [2024-10-08 18:39:35.442061] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.197 ms, result 0 00:28:46.671 true 00:28:46.671 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:46.930 { 00:28:46.930 "name": "ftl", 00:28:46.930 "properties": [ 00:28:46.930 { 00:28:46.930 "name": "superblock_version", 00:28:46.930 "value": 5, 00:28:46.930 "read-only": true 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "name": "base_device", 00:28:46.930 "bands": [ 00:28:46.930 { 00:28:46.930 "id": 0, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 1, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 2, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 3, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 4, 
00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 5, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 6, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 7, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 8, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 9, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 10, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 11, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 12, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 13, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 14, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 15, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 16, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 17, 00:28:46.930 "state": "FREE", 00:28:46.930 "validity": 0.0 00:28:46.930 } 00:28:46.930 ], 00:28:46.930 "read-only": true 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "name": "cache_device", 00:28:46.930 "type": "bdev", 00:28:46.930 "chunks": [ 00:28:46.930 { 00:28:46.930 "id": 0, 00:28:46.930 "state": "INACTIVE", 00:28:46.930 "utilization": 0.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 1, 00:28:46.930 "state": "CLOSED", 00:28:46.930 "utilization": 1.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 2, 00:28:46.930 
"state": "CLOSED", 00:28:46.930 "utilization": 1.0 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 3, 00:28:46.930 "state": "OPEN", 00:28:46.930 "utilization": 0.001953125 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "id": 4, 00:28:46.930 "state": "OPEN", 00:28:46.930 "utilization": 0.0 00:28:46.930 } 00:28:46.930 ], 00:28:46.930 "read-only": true 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "name": "verbose_mode", 00:28:46.930 "value": true, 00:28:46.930 "unit": "", 00:28:46.930 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:46.930 }, 00:28:46.930 { 00:28:46.930 "name": "prep_upgrade_on_shutdown", 00:28:46.930 "value": true, 00:28:46.930 "unit": "", 00:28:46.930 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:46.930 } 00:28:46.930 ] 00:28:46.930 } 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 93787 ]] 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 93787 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 93787 ']' 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 93787 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93787 00:28:46.930 killing process with pid 93787 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 93787' 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 93787 00:28:46.930 18:39:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 93787 00:28:47.188 [2024-10-08 18:39:35.779204] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:47.188 [2024-10-08 18:39:35.783045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.188 [2024-10-08 18:39:35.783076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:47.188 [2024-10-08 18:39:35.783088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:47.188 [2024-10-08 18:39:35.783094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:47.188 [2024-10-08 18:39:35.783111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:47.188 [2024-10-08 18:39:35.783503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:47.188 [2024-10-08 18:39:35.783523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:47.188 [2024-10-08 18:39:35.783531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.382 ms 00:28:47.188 [2024-10-08 18:39:35.783538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.308 [2024-10-08 18:39:43.758222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.308 [2024-10-08 18:39:43.758285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:55.308 [2024-10-08 18:39:43.758306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7974.628 ms 00:28:55.308 [2024-10-08 18:39:43.758318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.308 [2024-10-08 18:39:43.759777] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:55.308 [2024-10-08 18:39:43.759803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:55.308 [2024-10-08 18:39:43.759812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.443 ms 00:28:55.308 [2024-10-08 18:39:43.759820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.308 [2024-10-08 18:39:43.760976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.308 [2024-10-08 18:39:43.761118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:55.308 [2024-10-08 18:39:43.761134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.130 ms 00:28:55.308 [2024-10-08 18:39:43.761142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.308 [2024-10-08 18:39:43.762413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.308 [2024-10-08 18:39:43.762442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:55.308 [2024-10-08 18:39:43.762451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.220 ms 00:28:55.308 [2024-10-08 18:39:43.762458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.308 [2024-10-08 18:39:43.764188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.308 [2024-10-08 18:39:43.764309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:55.308 [2024-10-08 18:39:43.764324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.701 ms 00:28:55.308 [2024-10-08 18:39:43.764333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.764391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.309 [2024-10-08 18:39:43.764400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:55.309 [2024-10-08 18:39:43.764413] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:55.309 [2024-10-08 18:39:43.764425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.765251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.309 [2024-10-08 18:39:43.765280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:55.309 [2024-10-08 18:39:43.765290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.810 ms 00:28:55.309 [2024-10-08 18:39:43.765296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.766389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.309 [2024-10-08 18:39:43.766419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:55.309 [2024-10-08 18:39:43.766427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.066 ms 00:28:55.309 [2024-10-08 18:39:43.766434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.767386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.309 [2024-10-08 18:39:43.767505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:55.309 [2024-10-08 18:39:43.767518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.924 ms 00:28:55.309 [2024-10-08 18:39:43.767525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.768295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.309 [2024-10-08 18:39:43.768322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:55.309 [2024-10-08 18:39:43.768330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.718 ms 00:28:55.309 [2024-10-08 18:39:43.768338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.768364] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:55.309 [2024-10-08 18:39:43.768377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:55.309 [2024-10-08 18:39:43.768387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:55.309 [2024-10-08 18:39:43.768395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:55.309 [2024-10-08 18:39:43.768403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768480] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:55.309 [2024-10-08 18:39:43.768520] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:55.309 [2024-10-08 18:39:43.768528] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d9582dc7-347d-423d-bccf-d9158cc2fec8 00:28:55.309 [2024-10-08 18:39:43.768535] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:55.309 [2024-10-08 18:39:43.768542] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:28:55.309 [2024-10-08 18:39:43.768549] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:55.309 [2024-10-08 18:39:43.768556] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:55.309 [2024-10-08 18:39:43.768564] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:55.309 [2024-10-08 18:39:43.768580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:55.309 [2024-10-08 18:39:43.768587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:55.309 [2024-10-08 18:39:43.768593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:55.309 [2024-10-08 18:39:43.768600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:55.309 [2024-10-08 18:39:43.768607] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:28:55.309 [2024-10-08 18:39:43.768615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:55.309 [2024-10-08 18:39:43.768623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.244 ms 00:28:55.309 [2024-10-08 18:39:43.768630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.770064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.309 [2024-10-08 18:39:43.770081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:55.309 [2024-10-08 18:39:43.770089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.420 ms 00:28:55.309 [2024-10-08 18:39:43.770102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.770171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:55.309 [2024-10-08 18:39:43.770179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:55.309 [2024-10-08 18:39:43.770186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:55.309 [2024-10-08 18:39:43.770193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.775131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.775162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:55.309 [2024-10-08 18:39:43.775171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.775183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.775208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.775216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:55.309 [2024-10-08 18:39:43.775223] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.775231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.775288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.775298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:55.309 [2024-10-08 18:39:43.775305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.775312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.775331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.775339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:55.309 [2024-10-08 18:39:43.775346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.775353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.783747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.783791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:55.309 [2024-10-08 18:39:43.783808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.783819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.790835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.790873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:55.309 [2024-10-08 18:39:43.790883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.790890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 
18:39:43.790933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.790942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:55.309 [2024-10-08 18:39:43.790950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.790957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.791007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.791016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:55.309 [2024-10-08 18:39:43.791024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.791031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.791096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.791106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:55.309 [2024-10-08 18:39:43.791113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.791121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.791148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.791160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:55.309 [2024-10-08 18:39:43.791168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.791179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.791215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.791224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 
00:28:55.309 [2024-10-08 18:39:43.791231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.791238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.791280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:55.309 [2024-10-08 18:39:43.791290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:55.309 [2024-10-08 18:39:43.791298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:55.309 [2024-10-08 18:39:43.791305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:55.309 [2024-10-08 18:39:43.791413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8008.327 ms, result 0 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:07.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94272 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94272 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 94272 ']' 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.574 18:39:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:07.574 [2024-10-08 18:39:56.152742] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:29:07.574 [2024-10-08 18:39:56.152873] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94272 ] 00:29:07.574 [2024-10-08 18:39:56.281666] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:07.574 [2024-10-08 18:39:56.294309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.574 [2024-10-08 18:39:56.327321] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.831 [2024-10-08 18:39:56.589622] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:07.831 [2024-10-08 18:39:56.589690] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:08.089 [2024-10-08 18:39:56.732480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.089 [2024-10-08 18:39:56.732540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:08.089 [2024-10-08 18:39:56.732554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:08.090 [2024-10-08 18:39:56.732562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.732622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.732632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:08.090 [2024-10-08 18:39:56.732640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:08.090 [2024-10-08 18:39:56.732647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.732681] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:08.090 [2024-10-08 18:39:56.732929] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:08.090 [2024-10-08 18:39:56.732944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.732954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:08.090 [2024-10-08 18:39:56.732962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.271 ms 00:29:08.090 
[2024-10-08 18:39:56.732973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.734047] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:08.090 [2024-10-08 18:39:56.736663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.736698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:08.090 [2024-10-08 18:39:56.736715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.618 ms 00:29:08.090 [2024-10-08 18:39:56.736724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.736793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.736804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:08.090 [2024-10-08 18:39:56.736812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:08.090 [2024-10-08 18:39:56.736819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.741560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.741591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:08.090 [2024-10-08 18:39:56.741601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.686 ms 00:29:08.090 [2024-10-08 18:39:56.741609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.741649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.741657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:08.090 [2024-10-08 18:39:56.741665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:08.090 [2024-10-08 18:39:56.741672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:29:08.090 [2024-10-08 18:39:56.741716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.741725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:08.090 [2024-10-08 18:39:56.741733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:08.090 [2024-10-08 18:39:56.741743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.741781] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:08.090 [2024-10-08 18:39:56.743074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.743093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:08.090 [2024-10-08 18:39:56.743102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.301 ms 00:29:08.090 [2024-10-08 18:39:56.743109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.743136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.743144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:08.090 [2024-10-08 18:39:56.743156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:08.090 [2024-10-08 18:39:56.743163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.743183] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:08.090 [2024-10-08 18:39:56.743200] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:08.090 [2024-10-08 18:39:56.743234] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:08.090 [2024-10-08 18:39:56.743248] upgrade/ftl_sb_v5.c: 
294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:08.090 [2024-10-08 18:39:56.743351] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:08.090 [2024-10-08 18:39:56.743363] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:08.090 [2024-10-08 18:39:56.743378] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:08.090 [2024-10-08 18:39:56.743388] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:08.090 [2024-10-08 18:39:56.743397] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:08.090 [2024-10-08 18:39:56.743404] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:08.090 [2024-10-08 18:39:56.743412] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:08.090 [2024-10-08 18:39:56.743419] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:08.090 [2024-10-08 18:39:56.743426] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:08.090 [2024-10-08 18:39:56.743434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.743440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:08.090 [2024-10-08 18:39:56.743448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:29:08.090 [2024-10-08 18:39:56.743457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.743541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:08.090 [2024-10-08 18:39:56.743597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:08.090 [2024-10-08 
18:39:56.743607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:29:08.090 [2024-10-08 18:39:56.743614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.090 [2024-10-08 18:39:56.743717] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:08.090 [2024-10-08 18:39:56.743727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:08.090 [2024-10-08 18:39:56.743736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:08.090 [2024-10-08 18:39:56.743743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.090 [2024-10-08 18:39:56.743778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:08.090 [2024-10-08 18:39:56.743786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:08.090 [2024-10-08 18:39:56.743793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:08.090 [2024-10-08 18:39:56.743800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:08.090 [2024-10-08 18:39:56.743808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:08.090 [2024-10-08 18:39:56.743815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.090 [2024-10-08 18:39:56.743822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:08.090 [2024-10-08 18:39:56.743828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:08.090 [2024-10-08 18:39:56.743835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.090 [2024-10-08 18:39:56.743841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:08.090 [2024-10-08 18:39:56.743848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:08.090 [2024-10-08 18:39:56.743855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.090 [2024-10-08 
18:39:56.743865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:08.090 [2024-10-08 18:39:56.743872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:08.090 [2024-10-08 18:39:56.743879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.090 [2024-10-08 18:39:56.743892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:08.090 [2024-10-08 18:39:56.743898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:08.090 [2024-10-08 18:39:56.743904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:08.090 [2024-10-08 18:39:56.743911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:08.090 [2024-10-08 18:39:56.743917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:08.090 [2024-10-08 18:39:56.743923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:08.090 [2024-10-08 18:39:56.743930] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:08.090 [2024-10-08 18:39:56.743936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:08.090 [2024-10-08 18:39:56.743943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:08.090 [2024-10-08 18:39:56.743949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:08.090 [2024-10-08 18:39:56.743956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:08.090 [2024-10-08 18:39:56.743962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:08.090 [2024-10-08 18:39:56.743968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:08.090 [2024-10-08 18:39:56.743977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:08.090 [2024-10-08 18:39:56.743984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.090 [2024-10-08 
18:39:56.743991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:08.090 [2024-10-08 18:39:56.743997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:08.090 [2024-10-08 18:39:56.744003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.090 [2024-10-08 18:39:56.744010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:08.090 [2024-10-08 18:39:56.744016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:08.090 [2024-10-08 18:39:56.744022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.090 [2024-10-08 18:39:56.744031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:08.090 [2024-10-08 18:39:56.744037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:08.090 [2024-10-08 18:39:56.744044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.090 [2024-10-08 18:39:56.744050] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:08.090 [2024-10-08 18:39:56.744061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:08.090 [2024-10-08 18:39:56.744068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:08.090 [2024-10-08 18:39:56.744074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:08.090 [2024-10-08 18:39:56.744082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:08.090 [2024-10-08 18:39:56.744090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:08.091 [2024-10-08 18:39:56.744097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:08.091 [2024-10-08 18:39:56.744103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:08.091 [2024-10-08 18:39:56.744109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 
00:29:08.091 [2024-10-08 18:39:56.744116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:08.091 [2024-10-08 18:39:56.744123] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:08.091 [2024-10-08 18:39:56.744132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:08.091 [2024-10-08 18:39:56.744140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:08.091 [2024-10-08 18:39:56.744147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:08.091 [2024-10-08 18:39:56.744154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:08.091 [2024-10-08 18:39:56.744161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:08.091 [2024-10-08 18:39:56.744168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:08.091 [2024-10-08 18:39:56.744175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:08.091 [2024-10-08 18:39:56.744182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:08.091 [2024-10-08 18:39:56.744188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:08.091 [2024-10-08 18:39:56.744195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:08.091 
[2024-10-08 18:39:56.744206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:08.091 [2024-10-08 18:39:56.744214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:08.091 [2024-10-08 18:39:56.744221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:08.091 [2024-10-08 18:39:56.744227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:08.091 [2024-10-08 18:39:56.744235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:08.091 [2024-10-08 18:39:56.744243] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:08.091 [2024-10-08 18:39:56.744252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:08.091 [2024-10-08 18:39:56.744264] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:08.091 [2024-10-08 18:39:56.744272] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:08.091 [2024-10-08 18:39:56.744280] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:08.091 [2024-10-08 18:39:56.744288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:08.091 [2024-10-08 18:39:56.744297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:08.091 [2024-10-08 18:39:56.744305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:08.091 [2024-10-08 18:39:56.744316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.650 ms 00:29:08.091 [2024-10-08 18:39:56.744326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:08.091 [2024-10-08 18:39:56.744367] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:29:08.091 [2024-10-08 18:39:56.744378] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:10.616 [2024-10-08 18:39:58.838590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.616 [2024-10-08 18:39:58.838655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:10.616 [2024-10-08 18:39:58.838670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2094.214 ms 00:29:10.616 [2024-10-08 18:39:58.838678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.616 [2024-10-08 18:39:58.846360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.616 [2024-10-08 18:39:58.846407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:10.616 [2024-10-08 18:39:58.846425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.582 ms 00:29:10.616 [2024-10-08 18:39:58.846433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.616 [2024-10-08 18:39:58.846496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.616 [2024-10-08 18:39:58.846505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:10.616 [2024-10-08 18:39:58.846514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:10.616 [2024-10-08 18:39:58.846522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:29:10.616 [2024-10-08 18:39:58.864118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.616 [2024-10-08 18:39:58.864165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:10.616 [2024-10-08 18:39:58.864177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.549 ms 00:29:10.616 [2024-10-08 18:39:58.864186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.616 [2024-10-08 18:39:58.864232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.616 [2024-10-08 18:39:58.864245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:10.617 [2024-10-08 18:39:58.864254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:10.617 [2024-10-08 18:39:58.864261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.864616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.864632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:10.617 [2024-10-08 18:39:58.864642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:29:10.617 [2024-10-08 18:39:58.864649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.864689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.864698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:10.617 [2024-10-08 18:39:58.864717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:10.617 [2024-10-08 18:39:58.864724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.870050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.870087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Initialize reloc 00:29:10.617 [2024-10-08 18:39:58.870098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.303 ms 00:29:10.617 [2024-10-08 18:39:58.870108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.872446] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:29:10.617 [2024-10-08 18:39:58.872487] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:10.617 [2024-10-08 18:39:58.872501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.872510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:29:10.617 [2024-10-08 18:39:58.872520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.280 ms 00:29:10.617 [2024-10-08 18:39:58.872529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.877075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.877112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:29:10.617 [2024-10-08 18:39:58.877130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.501 ms 00:29:10.617 [2024-10-08 18:39:58.877141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.878521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.878558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:29:10.617 [2024-10-08 18:39:58.878570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.345 ms 00:29:10.617 [2024-10-08 18:39:58.878578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.880057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:29:10.617 [2024-10-08 18:39:58.880090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:29:10.617 [2024-10-08 18:39:58.880101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.439 ms 00:29:10.617 [2024-10-08 18:39:58.880110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.880510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.880536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:10.617 [2024-10-08 18:39:58.880547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.325 ms 00:29:10.617 [2024-10-08 18:39:58.880555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.895052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.895106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:10.617 [2024-10-08 18:39:58.895123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.467 ms 00:29:10.617 [2024-10-08 18:39:58.895131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.902554] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:10.617 [2024-10-08 18:39:58.903482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.903512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:10.617 [2024-10-08 18:39:58.903530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.304 ms 00:29:10.617 [2024-10-08 18:39:58.903539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.903603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.903613] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:29:10.617 [2024-10-08 18:39:58.903623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:10.617 [2024-10-08 18:39:58.903631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.903688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.903698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:10.617 [2024-10-08 18:39:58.903712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:10.617 [2024-10-08 18:39:58.903720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.903742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.903768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:10.617 [2024-10-08 18:39:58.903777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:10.617 [2024-10-08 18:39:58.903784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.903815] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:10.617 [2024-10-08 18:39:58.903826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.903833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:29:10.617 [2024-10-08 18:39:58.903841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:10.617 [2024-10-08 18:39:58.903848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.906796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.906832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:10.617 
[2024-10-08 18:39:58.906849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.929 ms 00:29:10.617 [2024-10-08 18:39:58.906856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.906924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:58.906933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:10.617 [2024-10-08 18:39:58.906941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:10.617 [2024-10-08 18:39:58.906949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:58.907905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2175.024 ms, result 0 00:29:10.617 [2024-10-08 18:39:58.920243] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.617 [2024-10-08 18:39:58.936238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:10.617 [2024-10-08 18:39:58.944345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:10.617 18:39:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:10.617 18:39:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:10.617 18:39:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:10.617 18:39:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:10.617 18:39:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:10.617 [2024-10-08 18:39:59.160434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:59.160480] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:10.617 [2024-10-08 18:39:59.160493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:10.617 [2024-10-08 18:39:59.160501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:59.160531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:59.160541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:10.617 [2024-10-08 18:39:59.160549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:10.617 [2024-10-08 18:39:59.160556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:59.160578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:10.617 [2024-10-08 18:39:59.160586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:10.617 [2024-10-08 18:39:59.160594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:10.617 [2024-10-08 18:39:59.160601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:10.617 [2024-10-08 18:39:59.160660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.215 ms, result 0 00:29:10.617 true 00:29:10.617 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:10.617 { 00:29:10.617 "name": "ftl", 00:29:10.617 "properties": [ 00:29:10.617 { 00:29:10.617 "name": "superblock_version", 00:29:10.617 "value": 5, 00:29:10.617 "read-only": true 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "name": "base_device", 00:29:10.617 "bands": [ 00:29:10.617 { 00:29:10.617 "id": 0, 00:29:10.617 "state": "CLOSED", 00:29:10.617 "validity": 1.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 1, 00:29:10.617 "state": "CLOSED", 00:29:10.617 
"validity": 1.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 2, 00:29:10.617 "state": "CLOSED", 00:29:10.617 "validity": 0.007843137254901933 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 3, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 4, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 5, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 6, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 7, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 8, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 9, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 10, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 11, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 12, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 13, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 14, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 15, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 16, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 17, 00:29:10.617 "state": "FREE", 00:29:10.617 "validity": 0.0 00:29:10.617 } 00:29:10.617 ], 00:29:10.617 "read-only": true 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "name": "cache_device", 00:29:10.617 "type": "bdev", 
00:29:10.617 "chunks": [ 00:29:10.617 { 00:29:10.617 "id": 0, 00:29:10.617 "state": "INACTIVE", 00:29:10.617 "utilization": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 1, 00:29:10.617 "state": "OPEN", 00:29:10.617 "utilization": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 2, 00:29:10.617 "state": "OPEN", 00:29:10.617 "utilization": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 3, 00:29:10.617 "state": "FREE", 00:29:10.617 "utilization": 0.0 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "id": 4, 00:29:10.617 "state": "FREE", 00:29:10.617 "utilization": 0.0 00:29:10.617 } 00:29:10.617 ], 00:29:10.617 "read-only": true 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "name": "verbose_mode", 00:29:10.617 "value": true, 00:29:10.617 "unit": "", 00:29:10.617 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:10.617 }, 00:29:10.617 { 00:29:10.617 "name": "prep_upgrade_on_shutdown", 00:29:10.617 "value": false, 00:29:10.617 "unit": "", 00:29:10.617 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:10.617 } 00:29:10.617 ] 00:29:10.617 } 00:29:10.617 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:29:10.617 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:10.617 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:10.876 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:29:10.876 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:29:10.876 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:29:10.876 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] 
| select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:29:10.876 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:11.147 Validate MD5 checksum, iteration 1 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:11.147 18:39:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:11.147 
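The jq invocations in the trace above count FTL cache chunks and bands from the `bdev_ftl_get_properties` JSON shown earlier in the log. A minimal Python sketch of the same selection (the embedded JSON is a trimmed excerpt of the properties output recorded above, not live RPC output):

```python
import json

# Count cache_device chunks with non-zero utilization, mirroring the
# jq filter `[.properties[] | select(.name == "cache_device") |
# .chunks[] | select(.utilization != 0.0)] | length` from the trace.
props = json.loads("""
{
  "name": "ftl",
  "properties": [
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        {"id": 0, "state": "INACTIVE", "utilization": 0.0},
        {"id": 1, "state": "OPEN", "utilization": 0.0},
        {"id": 2, "state": "OPEN", "utilization": 0.0},
        {"id": 3, "state": "FREE", "utilization": 0.0},
        {"id": 4, "state": "FREE", "utilization": 0.0}
      ]
    }
  ]
}
""")

used = sum(
    1
    for prop in props["properties"]
    if prop["name"] == "cache_device"
    for chunk in prop["chunks"]
    if chunk["utilization"] != 0.0
)
print(used)  # matches the `used=0` the test records before `[[ 0 -ne 0 ]]`
```

With all chunk utilizations at 0.0, the count is 0, which is why the `[[ 0 -ne 0 ]]` guard in the trace falls through and the test proceeds to the checksum phase.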
[2024-10-08 18:39:59.866651] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:29:11.147 [2024-10-08 18:39:59.866785] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94322 ] 00:29:11.454 [2024-10-08 18:39:59.993558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:11.454 [2024-10-08 18:40:00.012263] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.454 [2024-10-08 18:40:00.043873] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.824  [2024-10-08T18:40:01.930Z] Copying: 689/1024 [MB] (689 MBps) [2024-10-08T18:40:02.494Z] Copying: 1024/1024 [MB] (average 670 MBps) 00:29:13.644 00:29:13.644 18:40:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:13.644 18:40:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:16.186 Validate MD5 checksum, iteration 2 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fda29888a9ca0130ef4ce54320f394c8 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fda29888a9ca0130ef4ce54320f394c8 != \f\d\a\2\9\8\8\8\a\9\c\a\0\1\3\0\e\f\4\c\e\5\4\3\2\0\f\3\9\4\c\8 ]] 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:16.186 18:40:04 
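The checksum iterations seen here follow a simple pattern: copy a 1024 MiB window out of `ftln1` with `spdk_dd`, hash the file, and compare the digest against the one recorded when the data was written, advancing `--skip` each iteration. A minimal standalone sketch of just the compare step, with a temp file standing in for the data `spdk_dd` reads back:

```shell
# Minimal sketch of the md5 validate step; a temp file stands in for the
# window copied out of ftln1 by spdk_dd (hypothetical payload).
file=$(mktemp)
printf 'sample ftl payload' > "$file"

# Digest recorded at write time (here computed from the same file).
expected=$(md5sum "$file" | cut -f1 -d' ')

# Digest of the data read back after the shutdown/upgrade under test.
actual=$(md5sum "$file" | cut -f1 -d' ')

if [ "$actual" != "$expected" ]; then
  echo "MD5 mismatch: $actual != $expected" >&2
  exit 1
fi
echo "MD5 ok"
rm -f "$file"
```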
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:16.186 18:40:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:16.186 [2024-10-08 18:40:04.733166] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:29:16.186 [2024-10-08 18:40:04.733301] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94380 ] 00:29:16.186 [2024-10-08 18:40:04.862688] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:16.186 [2024-10-08 18:40:04.882670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.186 [2024-10-08 18:40:04.930438] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.571  [2024-10-08T18:40:07.363Z] Copying: 528/1024 [MB] (528 MBps) [2024-10-08T18:40:11.568Z] Copying: 1024/1024 [MB] (average 532 MBps) 00:29:22.718 00:29:22.718 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:22.718 18:40:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4802f191ba18dbb468e698c73a274a51 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4802f191ba18dbb468e698c73a274a51 != \4\8\0\2\f\1\9\1\b\a\1\8\d\b\b\4\6\8\e\6\9\8\c\7\3\a\2\7\4\a\5\1 ]] 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94272 ]] 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94272 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:24.616 
18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94475 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94475 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 94475 ']' 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:24.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:24.616 18:40:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:24.873 [2024-10-08 18:40:13.520244] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:29:24.874 [2024-10-08 18:40:13.520526] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94475 ] 00:29:24.874 [2024-10-08 18:40:13.648924] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:24.874 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 94272 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:29:24.874 [2024-10-08 18:40:13.668252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.874 [2024-10-08 18:40:13.699263] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.131 [2024-10-08 18:40:13.960844] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:25.131 [2024-10-08 18:40:13.960902] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:29:25.390 [2024-10-08 18:40:14.099096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.099156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:25.390 [2024-10-08 18:40:14.099174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:25.390 [2024-10-08 18:40:14.099184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.099245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.099257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:25.390 [2024-10-08 18:40:14.099267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:29:25.390 [2024-10-08 18:40:14.099276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.099306] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:25.390 [2024-10-08 18:40:14.099564] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:25.390 [2024-10-08 18:40:14.099584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.099596] mngt/ftl_mngt.c: 428:trace_step: 
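The `Killed` line above is the deliberate dirty shutdown: `tcp_target_shutdown_dirty` sends SIGKILL to the old target (pid 94272), so FTL never runs its clean-shutdown path and the fresh target started next must take the recovery path traced below. Reduced to its mechanics (with a `sleep` as a stand-in for the real spdk_tgt process):

```shell
# Stand-in long-running process for the spdk_tgt being dirty-killed
# (hypothetical; the real test kills the pid recorded at target startup).
sleep 300 &
pid=$!

kill -9 "$pid"          # SIGKILL: no handlers run, no clean shutdown
wait "$pid" 2>/dev/null
status=$?
echo "target exit status: $status"   # 137 = 128 + SIGKILL(9)
```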
*NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:25.390 [2024-10-08 18:40:14.099606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:29:25.390 [2024-10-08 18:40:14.099616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.099985] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:29:25.390 [2024-10-08 18:40:14.103460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.103611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:29:25.390 [2024-10-08 18:40:14.103632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.475 ms 00:29:25.390 [2024-10-08 18:40:14.103647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.104508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.104533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:29:25.390 [2024-10-08 18:40:14.104541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:29:25.390 [2024-10-08 18:40:14.104547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.104782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.104791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:25.390 [2024-10-08 18:40:14.104800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:29:25.390 [2024-10-08 18:40:14.104806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.104834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.104840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:25.390 [2024-10-08 18:40:14.104846] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:25.390 [2024-10-08 18:40:14.104852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.104876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.104882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:25.390 [2024-10-08 18:40:14.104889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:25.390 [2024-10-08 18:40:14.104904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.104927] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:25.390 [2024-10-08 18:40:14.105653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.105678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:25.390 [2024-10-08 18:40:14.105686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.731 ms 00:29:25.390 [2024-10-08 18:40:14.105691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.105714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.105721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:25.390 [2024-10-08 18:40:14.105726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:25.390 [2024-10-08 18:40:14.105735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.105760] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:29:25.390 [2024-10-08 18:40:14.105775] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:29:25.390 [2024-10-08 18:40:14.105801] 
upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:29:25.390 [2024-10-08 18:40:14.105814] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:29:25.390 [2024-10-08 18:40:14.105899] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:25.390 [2024-10-08 18:40:14.105913] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:25.390 [2024-10-08 18:40:14.105926] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:25.390 [2024-10-08 18:40:14.105937] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:25.390 [2024-10-08 18:40:14.105945] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:25.390 [2024-10-08 18:40:14.105951] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:25.390 [2024-10-08 18:40:14.105957] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:25.390 [2024-10-08 18:40:14.105963] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:25.390 [2024-10-08 18:40:14.105972] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:25.390 [2024-10-08 18:40:14.105978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.105983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:25.390 [2024-10-08 18:40:14.105990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:29:25.390 [2024-10-08 18:40:14.105995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.106064] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.390 [2024-10-08 18:40:14.106071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:25.390 [2024-10-08 18:40:14.106077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:29:25.390 [2024-10-08 18:40:14.106085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.390 [2024-10-08 18:40:14.106172] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:25.390 [2024-10-08 18:40:14.106181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:25.390 [2024-10-08 18:40:14.106190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:25.390 [2024-10-08 18:40:14.106196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.390 [2024-10-08 18:40:14.106203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:25.390 [2024-10-08 18:40:14.106208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:25.390 [2024-10-08 18:40:14.106213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:25.390 [2024-10-08 18:40:14.106218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:25.390 [2024-10-08 18:40:14.106224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:25.390 [2024-10-08 18:40:14.106230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.390 [2024-10-08 18:40:14.106235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:25.390 [2024-10-08 18:40:14.106240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:29:25.390 [2024-10-08 18:40:14.106245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.390 [2024-10-08 18:40:14.106250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:25.390 [2024-10-08 18:40:14.106255] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:25.390 [2024-10-08 18:40:14.106265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.390 [2024-10-08 18:40:14.106270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:25.390 [2024-10-08 18:40:14.106280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:25.390 [2024-10-08 18:40:14.106285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.390 [2024-10-08 18:40:14.106291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:25.390 [2024-10-08 18:40:14.106296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:25.390 [2024-10-08 18:40:14.106301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:25.390 [2024-10-08 18:40:14.106305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:25.390 [2024-10-08 18:40:14.106311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:25.390 [2024-10-08 18:40:14.106317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:25.390 [2024-10-08 18:40:14.106323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:25.390 [2024-10-08 18:40:14.106329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:25.391 [2024-10-08 18:40:14.106334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:25.391 [2024-10-08 18:40:14.106340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:25.391 [2024-10-08 18:40:14.106346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:25.391 [2024-10-08 18:40:14.106352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:25.391 [2024-10-08 18:40:14.106360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:25.391 [2024-10-08 18:40:14.106366] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:25.391 [2024-10-08 18:40:14.106372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.391 [2024-10-08 18:40:14.106378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:25.391 [2024-10-08 18:40:14.106384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:25.391 [2024-10-08 18:40:14.106390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.391 [2024-10-08 18:40:14.106395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:25.391 [2024-10-08 18:40:14.106402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:25.391 [2024-10-08 18:40:14.106407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.391 [2024-10-08 18:40:14.106413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:25.391 [2024-10-08 18:40:14.106419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:25.391 [2024-10-08 18:40:14.106425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.391 [2024-10-08 18:40:14.106430] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:29:25.391 [2024-10-08 18:40:14.106439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:25.391 [2024-10-08 18:40:14.106448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:25.391 [2024-10-08 18:40:14.106455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:25.391 [2024-10-08 18:40:14.106465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:25.391 [2024-10-08 18:40:14.106471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:25.391 [2024-10-08 18:40:14.106477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:25.391 [2024-10-08 
18:40:14.106483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:25.391 [2024-10-08 18:40:14.106489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:25.391 [2024-10-08 18:40:14.106495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:25.391 [2024-10-08 18:40:14.106502] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:25.391 [2024-10-08 18:40:14.106510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:25.391 [2024-10-08 18:40:14.106523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:25.391 [2024-10-08 18:40:14.106543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:25.391 [2024-10-08 18:40:14.106549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:25.391 [2024-10-08 18:40:14.106555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:25.391 [2024-10-08 18:40:14.106562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 
blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:25.391 [2024-10-08 18:40:14.106603] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:25.391 [2024-10-08 18:40:14.106609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:25.391 [2024-10-08 18:40:14.106621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:25.391 [2024-10-08 18:40:14.106627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:25.391 [2024-10-08 18:40:14.106632] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:25.391 [2024-10-08 18:40:14.106638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.106643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:25.391 [2024-10-08 18:40:14.106651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms 00:29:25.391 [2024-10-08 18:40:14.106658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.113079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.113180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:25.391 [2024-10-08 18:40:14.113224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.381 ms 00:29:25.391 [2024-10-08 18:40:14.113253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.113294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.113315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:25.391 [2024-10-08 18:40:14.113331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:25.391 [2024-10-08 18:40:14.113379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.130193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.130379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:25.391 [2024-10-08 18:40:14.130456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.759 ms 00:29:25.391 [2024-10-08 18:40:14.130491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.130574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.130767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:25.391 [2024-10-08 18:40:14.130844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:25.391 [2024-10-08 18:40:14.130877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.131094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.131220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:25.391 [2024-10-08 18:40:14.131290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:29:25.391 [2024-10-08 18:40:14.131380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.131474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.131534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:25.391 [2024-10-08 18:40:14.131609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:29:25.391 [2024-10-08 18:40:14.131626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.138394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.138537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:25.391 [2024-10-08 18:40:14.138610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.735 ms 00:29:25.391 [2024-10-08 18:40:14.138643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.138982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.139110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:29:25.391 [2024-10-08 18:40:14.139188] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:25.391 [2024-10-08 18:40:14.139218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.143292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.143414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:29:25.391 [2024-10-08 18:40:14.143469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.913 ms 00:29:25.391 [2024-10-08 18:40:14.143491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.144977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.145076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:25.391 [2024-10-08 18:40:14.145129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:29:25.391 [2024-10-08 18:40:14.145151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.159727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.159873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:29:25.391 [2024-10-08 18:40:14.159921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.436 ms 00:29:25.391 [2024-10-08 18:40:14.159945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.391 [2024-10-08 18:40:14.160050] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:29:25.391 [2024-10-08 18:40:14.160143] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:29:25.391 [2024-10-08 18:40:14.160321] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:29:25.391 [2024-10-08 18:40:14.160433] 
mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:29:25.391 [2024-10-08 18:40:14.160483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.391 [2024-10-08 18:40:14.160501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:29:25.391 [2024-10-08 18:40:14.160524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.498 ms 00:29:25.391 [2024-10-08 18:40:14.160540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.392 [2024-10-08 18:40:14.160587] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:29:25.392 [2024-10-08 18:40:14.160709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.392 [2024-10-08 18:40:14.160726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:29:25.392 [2024-10-08 18:40:14.160743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.123 ms 00:29:25.392 [2024-10-08 18:40:14.160771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.392 [2024-10-08 18:40:14.162923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.392 [2024-10-08 18:40:14.163026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:29:25.392 [2024-10-08 18:40:14.163085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.097 ms 00:29:25.392 [2024-10-08 18:40:14.163106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.392 [2024-10-08 18:40:14.163644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.392 [2024-10-08 18:40:14.163729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:29:25.392 [2024-10-08 18:40:14.163786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:29:25.392 [2024-10-08 18:40:14.163805] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.392 [2024-10-08 18:40:14.163878] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:29:25.392 [2024-10-08 18:40:14.164026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.392 [2024-10-08 18:40:14.164106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:25.392 [2024-10-08 18:40:14.164129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.148 ms 00:29:25.392 [2024-10-08 18:40:14.164144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.974 [2024-10-08 18:40:14.591407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.974 [2024-10-08 18:40:14.591605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:25.974 [2024-10-08 18:40:14.591674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 426.967 ms 00:29:25.974 [2024-10-08 18:40:14.591687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.974 [2024-10-08 18:40:14.592812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.974 [2024-10-08 18:40:14.592847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:25.974 [2024-10-08 18:40:14.592858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.670 ms 00:29:25.974 [2024-10-08 18:40:14.592872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.974 [2024-10-08 18:40:14.593186] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:29:25.974 [2024-10-08 18:40:14.593207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.974 [2024-10-08 18:40:14.593216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:25.974 
[2024-10-08 18:40:14.593226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:29:25.974 [2024-10-08 18:40:14.593234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.974 [2024-10-08 18:40:14.593272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.974 [2024-10-08 18:40:14.593281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:25.974 [2024-10-08 18:40:14.593290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:25.974 [2024-10-08 18:40:14.593301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:25.974 [2024-10-08 18:40:14.593334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 429.453 ms, result 0 00:29:25.974 [2024-10-08 18:40:14.593370] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:29:25.974 [2024-10-08 18:40:14.593452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:25.974 [2024-10-08 18:40:14.593462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:29:25.974 [2024-10-08 18:40:14.593477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:29:25.974 [2024-10-08 18:40:14.593484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.013425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.013501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:29:26.232 [2024-10-08 18:40:15.013517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 419.565 ms 00:29:26.232 [2024-10-08 18:40:15.013526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.014679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.014890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:29:26.232 [2024-10-08 18:40:15.014916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.707 ms 00:29:26.232 [2024-10-08 18:40:15.014932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.015267] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:29:26.232 [2024-10-08 18:40:15.015305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.015320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:29:26.232 [2024-10-08 18:40:15.015334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.346 ms 00:29:26.232 [2024-10-08 18:40:15.015345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.015388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.015402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:29:26.232 [2024-10-08 18:40:15.015415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:26.232 [2024-10-08 18:40:15.015427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.015475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 422.093 ms, result 0 00:29:26.232 [2024-10-08 18:40:15.015531] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:26.232 [2024-10-08 18:40:15.015548] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:29:26.232 [2024-10-08 18:40:15.015562] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.015574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:29:26.232 [2024-10-08 18:40:15.015587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 851.697 ms 00:29:26.232 [2024-10-08 18:40:15.015599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.015642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.015662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:29:26.232 [2024-10-08 18:40:15.015675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:26.232 [2024-10-08 18:40:15.015687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.023707] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:26.232 [2024-10-08 18:40:15.023830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.023843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:26.232 [2024-10-08 18:40:15.023853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.119 ms 00:29:26.232 [2024-10-08 18:40:15.023865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.024560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.024593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:29:26.232 [2024-10-08 18:40:15.024602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.598 ms 00:29:26.232 [2024-10-08 18:40:15.024613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.027188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 
[2024-10-08 18:40:15.027222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:29:26.232 [2024-10-08 18:40:15.027232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.552 ms 00:29:26.232 [2024-10-08 18:40:15.027243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.027285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.027294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:29:26.232 [2024-10-08 18:40:15.027302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:26.232 [2024-10-08 18:40:15.027309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.027409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.027419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:26.232 [2024-10-08 18:40:15.027426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:29:26.232 [2024-10-08 18:40:15.027433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.232 [2024-10-08 18:40:15.027455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.232 [2024-10-08 18:40:15.027463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:26.232 [2024-10-08 18:40:15.027471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:26.233 [2024-10-08 18:40:15.027478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.233 [2024-10-08 18:40:15.027507] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:29:26.233 [2024-10-08 18:40:15.027518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.233 [2024-10-08 18:40:15.027525] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl] name: Self test on startup 00:29:26.233 [2024-10-08 18:40:15.027532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:29:26.233 [2024-10-08 18:40:15.027540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.233 [2024-10-08 18:40:15.027594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.233 [2024-10-08 18:40:15.027603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:26.233 [2024-10-08 18:40:15.027611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:26.233 [2024-10-08 18:40:15.027618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.233 [2024-10-08 18:40:15.028629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 929.168 ms, result 0 00:29:26.233 [2024-10-08 18:40:15.044338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.233 [2024-10-08 18:40:15.060335] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:26.233 [2024-10-08 18:40:15.068442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:26.490 Validate MD5 checksum, iteration 1 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown 
-- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:26.490 18:40:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:26.491 [2024-10-08 18:40:15.160776] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:29:26.491 [2024-10-08 18:40:15.161031] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94495 ] 00:29:26.491 [2024-10-08 18:40:15.289679] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:26.491 [2024-10-08 18:40:15.308047] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.491 [2024-10-08 18:40:15.338795] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.861  [2024-10-08T18:40:17.275Z] Copying: 679/1024 [MB] (679 MBps) [2024-10-08T18:40:20.575Z] Copying: 1024/1024 [MB] (average 680 MBps) 00:29:31.725 00:29:31.725 18:40:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:29:31.725 18:40:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fda29888a9ca0130ef4ce54320f394c8 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fda29888a9ca0130ef4ce54320f394c8 != \f\d\a\2\9\8\8\8\a\9\c\a\0\1\3\0\e\f\4\c\e\5\4\3\2\0\f\3\9\4\c\8 ]] 00:29:33.621 Validate MD5 checksum, iteration 2 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@154 -- # return 0 00:29:33.621 18:40:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:33.621 [2024-10-08 18:40:22.039529] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:29:33.621 [2024-10-08 18:40:22.039806] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94569 ] 00:29:33.621 [2024-10-08 18:40:22.166466] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:33.621 [2024-10-08 18:40:22.181742] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.621 [2024-10-08 18:40:22.214340] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.008  [2024-10-08T18:40:24.116Z] Copying: 696/1024 [MB] (696 MBps) [2024-10-08T18:40:27.404Z] Copying: 1024/1024 [MB] (average 690 MBps) 00:29:38.554 00:29:38.554 18:40:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:29:38.554 18:40:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:41.079 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:29:41.079 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4802f191ba18dbb468e698c73a274a51 00:29:41.079 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4802f191ba18dbb468e698c73a274a51 != \4\8\0\2\f\1\9\1\b\a\1\8\d\b\b\4\6\8\e\6\9\8\c\7\3\a\2\7\4\a\5\1 ]] 
00:29:41.079 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:29:41.079 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:29:41.079 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:41.079 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94475 ]] 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94475 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 94475 ']' 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 94475 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94475 00:29:41.080 killing process with pid 94475 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = 
sudo ']' 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94475' 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 94475 00:29:41.080 18:40:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 94475 00:29:41.080 [2024-10-08 18:40:29.575030] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:41.080 [2024-10-08 18:40:29.579126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.579163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:41.080 [2024-10-08 18:40:29.579177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:41.080 [2024-10-08 18:40:29.579186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.579212] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:41.080 [2024-10-08 18:40:29.579604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.579623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:41.080 [2024-10-08 18:40:29.579633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.380 ms 00:29:41.080 [2024-10-08 18:40:29.579640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.579882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.579905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:41.080 [2024-10-08 18:40:29.579915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.223 ms 00:29:41.080 [2024-10-08 18:40:29.579924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 
18:40:29.581153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.581178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:41.080 [2024-10-08 18:40:29.581187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.213 ms 00:29:41.080 [2024-10-08 18:40:29.581201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.582369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.582481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:41.080 [2024-10-08 18:40:29.582498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.141 ms 00:29:41.080 [2024-10-08 18:40:29.582506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.583664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.583696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:41.080 [2024-10-08 18:40:29.583706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.124 ms 00:29:41.080 [2024-10-08 18:40:29.583713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.584810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.584837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:41.080 [2024-10-08 18:40:29.584851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.059 ms 00:29:41.080 [2024-10-08 18:40:29.584858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.584938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.584947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:41.080 
[2024-10-08 18:40:29.584955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:29:41.080 [2024-10-08 18:40:29.584963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.585904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.586286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:41.080 [2024-10-08 18:40:29.586377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.925 ms 00:29:41.080 [2024-10-08 18:40:29.586404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.587561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.587659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:41.080 [2024-10-08 18:40:29.587709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.067 ms 00:29:41.080 [2024-10-08 18:40:29.587731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.588736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.588832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:41.080 [2024-10-08 18:40:29.588891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.956 ms 00:29:41.080 [2024-10-08 18:40:29.588913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.589744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.589842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:41.080 [2024-10-08 18:40:29.589854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.766 ms 00:29:41.080 [2024-10-08 18:40:29.589861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:29:41.080 [2024-10-08 18:40:29.589887] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:41.080 [2024-10-08 18:40:29.589901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:41.080 [2024-10-08 18:40:29.589911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:41.080 [2024-10-08 18:40:29.589919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:41.080 [2024-10-08 18:40:29.589927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.589935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.589943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.589950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.589958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.589965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.589973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.589980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.589988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.589995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 
18:40:29.590002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.590010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.590017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.590024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.590031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:41.080 [2024-10-08 18:40:29.590040] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:41.080 [2024-10-08 18:40:29.590051] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d9582dc7-347d-423d-bccf-d9158cc2fec8 00:29:41.080 [2024-10-08 18:40:29.590059] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:41.080 [2024-10-08 18:40:29.590067] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:29:41.080 [2024-10-08 18:40:29.590073] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:29:41.080 [2024-10-08 18:40:29.590081] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:29:41.080 [2024-10-08 18:40:29.590087] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:41.080 [2024-10-08 18:40:29.590095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:41.080 [2024-10-08 18:40:29.590102] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:41.080 [2024-10-08 18:40:29.590108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:41.080 [2024-10-08 18:40:29.590114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:41.080 [2024-10-08 18:40:29.590121] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.590129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:41.080 [2024-10-08 18:40:29.590137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.235 ms 00:29:41.080 [2024-10-08 18:40:29.590144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.591418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.591441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:41.080 [2024-10-08 18:40:29.591450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.256 ms 00:29:41.080 [2024-10-08 18:40:29.591457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.080 [2024-10-08 18:40:29.591528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.080 [2024-10-08 18:40:29.591536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:41.080 [2024-10-08 18:40:29.591543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:41.081 [2024-10-08 18:40:29.591553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.596470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.596500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:41.081 [2024-10-08 18:40:29.596510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.596516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.596544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.596552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:41.081 
[2024-10-08 18:40:29.596560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.596572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.596634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.596644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:41.081 [2024-10-08 18:40:29.596652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.596659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.596675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.596683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:41.081 [2024-10-08 18:40:29.596695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.596702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.605222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.605400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:41.081 [2024-10-08 18:40:29.605452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.605474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.612001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.612133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:41.081 [2024-10-08 18:40:29.612183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.612219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:29:41.081 [2024-10-08 18:40:29.612278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.612370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:41.081 [2024-10-08 18:40:29.612399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.612418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.612484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.612892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:41.081 [2024-10-08 18:40:29.612975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.613001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.613138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.613202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:41.081 [2024-10-08 18:40:29.613257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.613280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.613360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.613416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:41.081 [2024-10-08 18:40:29.613461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.613505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.613561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.613606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Open cache bdev 00:29:41.081 [2024-10-08 18:40:29.613630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.613639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.613683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:41.081 [2024-10-08 18:40:29.613699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:41.081 [2024-10-08 18:40:29.613707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:41.081 [2024-10-08 18:40:29.613714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.081 [2024-10-08 18:40:29.613844] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 34.688 ms, result 0 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:51.046 Remove shared memory files 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm 
-f /dev/shm/spdk_tgt_trace.pid94272 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:51.046 ************************************ 00:29:51.046 END TEST ftl_upgrade_shutdown 00:29:51.046 ************************************ 00:29:51.046 00:29:51.046 real 1m32.473s 00:29:51.046 user 1m57.038s 00:29:51.046 sys 0m19.617s 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:51.046 18:40:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:51.046 18:40:38 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:29:51.046 18:40:38 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:51.046 18:40:38 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:29:51.046 18:40:38 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:51.046 18:40:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:51.046 ************************************ 00:29:51.046 START TEST ftl_restore_fast 00:29:51.046 ************************************ 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:29:51.046 * Looking for test storage... 
00:29:51.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:29:51.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.046 --rc genhtml_branch_coverage=1 00:29:51.046 --rc genhtml_function_coverage=1 00:29:51.046 --rc genhtml_legend=1 00:29:51.046 --rc geninfo_all_blocks=1 00:29:51.046 --rc geninfo_unexecuted_blocks=1 00:29:51.046 00:29:51.046 ' 00:29:51.046 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:29:51.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.046 --rc genhtml_branch_coverage=1 00:29:51.046 --rc genhtml_function_coverage=1 00:29:51.047 --rc genhtml_legend=1 
00:29:51.047 --rc geninfo_all_blocks=1 00:29:51.047 --rc geninfo_unexecuted_blocks=1 00:29:51.047 00:29:51.047 ' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:29:51.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.047 --rc genhtml_branch_coverage=1 00:29:51.047 --rc genhtml_function_coverage=1 00:29:51.047 --rc genhtml_legend=1 00:29:51.047 --rc geninfo_all_blocks=1 00:29:51.047 --rc geninfo_unexecuted_blocks=1 00:29:51.047 00:29:51.047 ' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:29:51.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.047 --rc genhtml_branch_coverage=1 00:29:51.047 --rc genhtml_function_coverage=1 00:29:51.047 --rc genhtml_legend=1 00:29:51.047 --rc geninfo_all_blocks=1 00:29:51.047 --rc geninfo_unexecuted_blocks=1 00:29:51.047 00:29:51.047 ' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:51.047 
18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:29:51.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.V8Z9Q5IBSt 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- 
ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=94743 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 94743 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 94743 ']' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:51.047 18:40:38 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:29:51.047 [2024-10-08 18:40:38.517688] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:29:51.047 [2024-10-08 18:40:38.517827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94743 ] 00:29:51.047 [2024-10-08 18:40:38.646635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:29:51.047 [2024-10-08 18:40:38.669566] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.047 [2024-10-08 18:40:38.702605] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:29:51.047 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:51.047 { 00:29:51.047 "name": "nvme0n1", 00:29:51.047 "aliases": [ 
00:29:51.047 "90d67a7a-ccef-4214-9a07-17f29c77c9e7" 00:29:51.047 ], 00:29:51.047 "product_name": "NVMe disk", 00:29:51.047 "block_size": 4096, 00:29:51.047 "num_blocks": 1310720, 00:29:51.047 "uuid": "90d67a7a-ccef-4214-9a07-17f29c77c9e7", 00:29:51.047 "numa_id": -1, 00:29:51.047 "assigned_rate_limits": { 00:29:51.047 "rw_ios_per_sec": 0, 00:29:51.047 "rw_mbytes_per_sec": 0, 00:29:51.047 "r_mbytes_per_sec": 0, 00:29:51.047 "w_mbytes_per_sec": 0 00:29:51.047 }, 00:29:51.047 "claimed": true, 00:29:51.048 "claim_type": "read_many_write_one", 00:29:51.048 "zoned": false, 00:29:51.048 "supported_io_types": { 00:29:51.048 "read": true, 00:29:51.048 "write": true, 00:29:51.048 "unmap": true, 00:29:51.048 "flush": true, 00:29:51.048 "reset": true, 00:29:51.048 "nvme_admin": true, 00:29:51.048 "nvme_io": true, 00:29:51.048 "nvme_io_md": false, 00:29:51.048 "write_zeroes": true, 00:29:51.048 "zcopy": false, 00:29:51.048 "get_zone_info": false, 00:29:51.048 "zone_management": false, 00:29:51.048 "zone_append": false, 00:29:51.048 "compare": true, 00:29:51.048 "compare_and_write": false, 00:29:51.048 "abort": true, 00:29:51.048 "seek_hole": false, 00:29:51.048 "seek_data": false, 00:29:51.048 "copy": true, 00:29:51.048 "nvme_iov_md": false 00:29:51.048 }, 00:29:51.048 "driver_specific": { 00:29:51.048 "nvme": [ 00:29:51.048 { 00:29:51.048 "pci_address": "0000:00:11.0", 00:29:51.048 "trid": { 00:29:51.048 "trtype": "PCIe", 00:29:51.048 "traddr": "0000:00:11.0" 00:29:51.048 }, 00:29:51.048 "ctrlr_data": { 00:29:51.048 "cntlid": 0, 00:29:51.048 "vendor_id": "0x1b36", 00:29:51.048 "model_number": "QEMU NVMe Ctrl", 00:29:51.048 "serial_number": "12341", 00:29:51.048 "firmware_revision": "8.0.0", 00:29:51.048 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:51.048 "oacs": { 00:29:51.048 "security": 0, 00:29:51.048 "format": 1, 00:29:51.048 "firmware": 0, 00:29:51.048 "ns_manage": 1 00:29:51.048 }, 00:29:51.048 "multi_ctrlr": false, 00:29:51.048 "ana_reporting": false 00:29:51.048 }, 
00:29:51.048 "vs": { 00:29:51.048 "nvme_version": "1.4" 00:29:51.048 }, 00:29:51.048 "ns_data": { 00:29:51.048 "id": 1, 00:29:51.048 "can_share": false 00:29:51.048 } 00:29:51.048 } 00:29:51.048 ], 00:29:51.048 "mp_policy": "active_passive" 00:29:51.048 } 00:29:51.048 } 00:29:51.048 ]' 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:51.048 18:40:39 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:51.306 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=1ff921e5-605b-4f63-af17-7bc37c28b86a 00:29:51.306 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:29:51.306 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ff921e5-605b-4f63-af17-7bc37c28b86a 00:29:51.563 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:29:51.821 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=f868b44e-d9b5-4f9e-afc4-b3659976d530 00:29:51.821 18:40:40 
ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f868b44e-d9b5-4f9e-afc4-b3659976d530 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:52.079 { 00:29:52.079 "name": "4c8c35ce-0f44-493a-8079-b26c6c12f86e", 00:29:52.079 "aliases": [ 00:29:52.079 "lvs/nvme0n1p0" 00:29:52.079 ], 00:29:52.079 "product_name": "Logical Volume", 00:29:52.079 "block_size": 4096, 00:29:52.079 "num_blocks": 26476544, 00:29:52.079 
"uuid": "4c8c35ce-0f44-493a-8079-b26c6c12f86e", 00:29:52.079 "assigned_rate_limits": { 00:29:52.079 "rw_ios_per_sec": 0, 00:29:52.079 "rw_mbytes_per_sec": 0, 00:29:52.079 "r_mbytes_per_sec": 0, 00:29:52.079 "w_mbytes_per_sec": 0 00:29:52.079 }, 00:29:52.079 "claimed": false, 00:29:52.079 "zoned": false, 00:29:52.079 "supported_io_types": { 00:29:52.079 "read": true, 00:29:52.079 "write": true, 00:29:52.079 "unmap": true, 00:29:52.079 "flush": false, 00:29:52.079 "reset": true, 00:29:52.079 "nvme_admin": false, 00:29:52.079 "nvme_io": false, 00:29:52.079 "nvme_io_md": false, 00:29:52.079 "write_zeroes": true, 00:29:52.079 "zcopy": false, 00:29:52.079 "get_zone_info": false, 00:29:52.079 "zone_management": false, 00:29:52.079 "zone_append": false, 00:29:52.079 "compare": false, 00:29:52.079 "compare_and_write": false, 00:29:52.079 "abort": false, 00:29:52.079 "seek_hole": true, 00:29:52.079 "seek_data": true, 00:29:52.079 "copy": false, 00:29:52.079 "nvme_iov_md": false 00:29:52.079 }, 00:29:52.079 "driver_specific": { 00:29:52.079 "lvol": { 00:29:52.079 "lvol_store_uuid": "f868b44e-d9b5-4f9e-afc4-b3659976d530", 00:29:52.079 "base_bdev": "nvme0n1", 00:29:52.079 "thin_provision": true, 00:29:52.079 "num_allocated_clusters": 0, 00:29:52.079 "snapshot": false, 00:29:52.079 "clone": false, 00:29:52.079 "esnap_clone": false 00:29:52.079 } 00:29:52.079 } 00:29:52.079 } 00:29:52.079 ]' 00:29:52.079 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:52.336 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:52.336 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:52.336 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:52.336 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:52.336 18:40:40 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:52.336 
18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:29:52.336 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:29:52.336 18:40:40 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:29:52.594 18:40:41 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:29:52.594 18:40:41 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:29:52.594 18:40:41 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:52.594 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:52.594 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:52.594 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:52.594 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:52.594 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:52.852 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:52.852 { 00:29:52.852 "name": "4c8c35ce-0f44-493a-8079-b26c6c12f86e", 00:29:52.852 "aliases": [ 00:29:52.852 "lvs/nvme0n1p0" 00:29:52.852 ], 00:29:52.852 "product_name": "Logical Volume", 00:29:52.852 "block_size": 4096, 00:29:52.852 "num_blocks": 26476544, 00:29:52.852 "uuid": "4c8c35ce-0f44-493a-8079-b26c6c12f86e", 00:29:52.852 "assigned_rate_limits": { 00:29:52.852 "rw_ios_per_sec": 0, 00:29:52.852 "rw_mbytes_per_sec": 0, 00:29:52.852 "r_mbytes_per_sec": 0, 00:29:52.852 "w_mbytes_per_sec": 0 00:29:52.852 }, 00:29:52.852 "claimed": false, 00:29:52.852 "zoned": false, 00:29:52.852 "supported_io_types": { 00:29:52.852 "read": true, 00:29:52.852 "write": true, 00:29:52.852 
"unmap": true, 00:29:52.852 "flush": false, 00:29:52.852 "reset": true, 00:29:52.852 "nvme_admin": false, 00:29:52.852 "nvme_io": false, 00:29:52.852 "nvme_io_md": false, 00:29:52.852 "write_zeroes": true, 00:29:52.852 "zcopy": false, 00:29:52.852 "get_zone_info": false, 00:29:52.852 "zone_management": false, 00:29:52.852 "zone_append": false, 00:29:52.852 "compare": false, 00:29:52.852 "compare_and_write": false, 00:29:52.852 "abort": false, 00:29:52.852 "seek_hole": true, 00:29:52.852 "seek_data": true, 00:29:52.852 "copy": false, 00:29:52.852 "nvme_iov_md": false 00:29:52.852 }, 00:29:52.852 "driver_specific": { 00:29:52.852 "lvol": { 00:29:52.852 "lvol_store_uuid": "f868b44e-d9b5-4f9e-afc4-b3659976d530", 00:29:52.852 "base_bdev": "nvme0n1", 00:29:52.852 "thin_provision": true, 00:29:52.852 "num_allocated_clusters": 0, 00:29:52.852 "snapshot": false, 00:29:52.853 "clone": false, 00:29:52.853 "esnap_clone": false 00:29:52.853 } 00:29:52.853 } 00:29:52.853 } 00:29:52.853 ]' 00:29:52.853 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:52.853 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:52.853 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:52.853 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:52.853 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:52.853 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:52.853 18:40:41 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:29:52.853 18:40:41 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:29:53.110 18:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:29:53.110 18:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 
4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:53.111 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:53.111 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:29:53.111 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:29:53.111 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:29:53.111 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4c8c35ce-0f44-493a-8079-b26c6c12f86e 00:29:53.111 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:29:53.111 { 00:29:53.111 "name": "4c8c35ce-0f44-493a-8079-b26c6c12f86e", 00:29:53.111 "aliases": [ 00:29:53.111 "lvs/nvme0n1p0" 00:29:53.111 ], 00:29:53.111 "product_name": "Logical Volume", 00:29:53.111 "block_size": 4096, 00:29:53.111 "num_blocks": 26476544, 00:29:53.111 "uuid": "4c8c35ce-0f44-493a-8079-b26c6c12f86e", 00:29:53.111 "assigned_rate_limits": { 00:29:53.111 "rw_ios_per_sec": 0, 00:29:53.111 "rw_mbytes_per_sec": 0, 00:29:53.111 "r_mbytes_per_sec": 0, 00:29:53.111 "w_mbytes_per_sec": 0 00:29:53.111 }, 00:29:53.111 "claimed": false, 00:29:53.111 "zoned": false, 00:29:53.111 "supported_io_types": { 00:29:53.111 "read": true, 00:29:53.111 "write": true, 00:29:53.111 "unmap": true, 00:29:53.111 "flush": false, 00:29:53.111 "reset": true, 00:29:53.111 "nvme_admin": false, 00:29:53.111 "nvme_io": false, 00:29:53.111 "nvme_io_md": false, 00:29:53.111 "write_zeroes": true, 00:29:53.111 "zcopy": false, 00:29:53.111 "get_zone_info": false, 00:29:53.111 "zone_management": false, 00:29:53.111 "zone_append": false, 00:29:53.111 "compare": false, 00:29:53.111 "compare_and_write": false, 00:29:53.111 "abort": false, 00:29:53.111 "seek_hole": true, 00:29:53.111 "seek_data": true, 00:29:53.111 "copy": false, 00:29:53.111 "nvme_iov_md": false 
00:29:53.111 }, 00:29:53.111 "driver_specific": { 00:29:53.111 "lvol": { 00:29:53.111 "lvol_store_uuid": "f868b44e-d9b5-4f9e-afc4-b3659976d530", 00:29:53.111 "base_bdev": "nvme0n1", 00:29:53.111 "thin_provision": true, 00:29:53.111 "num_allocated_clusters": 0, 00:29:53.111 "snapshot": false, 00:29:53.111 "clone": false, 00:29:53.111 "esnap_clone": false 00:29:53.111 } 00:29:53.111 } 00:29:53.111 } 00:29:53.111 ]' 00:29:53.111 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4c8c35ce-0f44-493a-8079-b26c6c12f86e --l2p_dram_limit 10' 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:29:53.369 18:40:41 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4c8c35ce-0f44-493a-8079-b26c6c12f86e --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:29:53.369 [2024-10-08 
18:40:42.182528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.182576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:53.369 [2024-10-08 18:40:42.182589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:53.369 [2024-10-08 18:40:42.182596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.182642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.182649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:53.369 [2024-10-08 18:40:42.182659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:53.369 [2024-10-08 18:40:42.182666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.182685] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:53.369 [2024-10-08 18:40:42.182919] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:53.369 [2024-10-08 18:40:42.182933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.182943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:53.369 [2024-10-08 18:40:42.182950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:29:53.369 [2024-10-08 18:40:42.182957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.182982] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f9394223-9466-424e-8b5d-3e12e0b5e3f4 00:29:53.369 [2024-10-08 18:40:42.183935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.183953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Default-initialize superblock 00:29:53.369 [2024-10-08 18:40:42.183961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:53.369 [2024-10-08 18:40:42.183971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.188645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.188675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:53.369 [2024-10-08 18:40:42.188684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.618 ms 00:29:53.369 [2024-10-08 18:40:42.188699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.188778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.188788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:53.369 [2024-10-08 18:40:42.188800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:29:53.369 [2024-10-08 18:40:42.188810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.188841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.188850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:53.369 [2024-10-08 18:40:42.188858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:53.369 [2024-10-08 18:40:42.188865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.188882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:53.369 [2024-10-08 18:40:42.190157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.190184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:53.369 
[2024-10-08 18:40:42.190195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.277 ms 00:29:53.369 [2024-10-08 18:40:42.190201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.190228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.190235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:53.369 [2024-10-08 18:40:42.190244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:53.369 [2024-10-08 18:40:42.190250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.190271] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:29:53.369 [2024-10-08 18:40:42.190382] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:53.369 [2024-10-08 18:40:42.190395] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:53.369 [2024-10-08 18:40:42.190406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:53.369 [2024-10-08 18:40:42.190415] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:53.369 [2024-10-08 18:40:42.190422] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:53.369 [2024-10-08 18:40:42.190435] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:53.369 [2024-10-08 18:40:42.190443] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:53.369 [2024-10-08 18:40:42.190450] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:53.369 [2024-10-08 18:40:42.190458] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache 
chunk count 5 00:29:53.369 [2024-10-08 18:40:42.190465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.190471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:53.369 [2024-10-08 18:40:42.190479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:29:53.369 [2024-10-08 18:40:42.190485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.190551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.369 [2024-10-08 18:40:42.190557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:53.369 [2024-10-08 18:40:42.190564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:29:53.369 [2024-10-08 18:40:42.190569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.369 [2024-10-08 18:40:42.190647] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:53.369 [2024-10-08 18:40:42.190655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:53.369 [2024-10-08 18:40:42.190662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:53.369 [2024-10-08 18:40:42.190668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.369 [2024-10-08 18:40:42.190675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:53.369 [2024-10-08 18:40:42.190680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:53.369 [2024-10-08 18:40:42.190687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:53.369 [2024-10-08 18:40:42.190692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:53.369 [2024-10-08 18:40:42.190700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:53.369 [2024-10-08 18:40:42.190705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 0.50 MiB 00:29:53.369 [2024-10-08 18:40:42.190711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:53.370 [2024-10-08 18:40:42.190717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:53.370 [2024-10-08 18:40:42.190724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:53.370 [2024-10-08 18:40:42.190729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:53.370 [2024-10-08 18:40:42.190736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:53.370 [2024-10-08 18:40:42.190741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.370 [2024-10-08 18:40:42.190748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:53.370 [2024-10-08 18:40:42.190763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:53.370 [2024-10-08 18:40:42.190770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.370 [2024-10-08 18:40:42.190775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:53.370 [2024-10-08 18:40:42.190781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:53.370 [2024-10-08 18:40:42.190786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.370 [2024-10-08 18:40:42.190793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:53.370 [2024-10-08 18:40:42.190798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:53.370 [2024-10-08 18:40:42.190804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.370 [2024-10-08 18:40:42.190810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:53.370 [2024-10-08 18:40:42.190817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:53.370 [2024-10-08 18:40:42.190823] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.370 [2024-10-08 18:40:42.190832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:53.370 [2024-10-08 18:40:42.190837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:53.370 [2024-10-08 18:40:42.190845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:53.370 [2024-10-08 18:40:42.190850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:53.370 [2024-10-08 18:40:42.190857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:53.370 [2024-10-08 18:40:42.190863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:53.370 [2024-10-08 18:40:42.190871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:53.370 [2024-10-08 18:40:42.190877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:53.370 [2024-10-08 18:40:42.190884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:53.370 [2024-10-08 18:40:42.190889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:53.370 [2024-10-08 18:40:42.190896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:53.370 [2024-10-08 18:40:42.190902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.370 [2024-10-08 18:40:42.190909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:53.370 [2024-10-08 18:40:42.190914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:53.370 [2024-10-08 18:40:42.190921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.370 [2024-10-08 18:40:42.190926] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:53.370 [2024-10-08 18:40:42.190936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:53.370 [2024-10-08 
18:40:42.190942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:53.370 [2024-10-08 18:40:42.190949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:53.370 [2024-10-08 18:40:42.190959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:53.370 [2024-10-08 18:40:42.190966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:53.370 [2024-10-08 18:40:42.190972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:53.370 [2024-10-08 18:40:42.190980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:53.370 [2024-10-08 18:40:42.190986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:53.370 [2024-10-08 18:40:42.190993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:53.370 [2024-10-08 18:40:42.191001] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:53.370 [2024-10-08 18:40:42.191010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:53.370 [2024-10-08 18:40:42.191017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:53.370 [2024-10-08 18:40:42.191025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:53.370 [2024-10-08 18:40:42.191031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:53.370 [2024-10-08 18:40:42.191040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:53.370 [2024-10-08 18:40:42.191047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:53.370 [2024-10-08 18:40:42.191055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:53.370 [2024-10-08 18:40:42.191061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:53.370 [2024-10-08 18:40:42.191069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:53.370 [2024-10-08 18:40:42.191075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:53.370 [2024-10-08 18:40:42.191082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:53.370 [2024-10-08 18:40:42.191088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:53.370 [2024-10-08 18:40:42.191097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:53.370 [2024-10-08 18:40:42.191103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:53.370 [2024-10-08 18:40:42.191110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:53.370 [2024-10-08 18:40:42.191116] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:53.370 [2024-10-08 18:40:42.191129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:53.370 [2024-10-08 
18:40:42.191135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:53.370 [2024-10-08 18:40:42.191143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:53.370 [2024-10-08 18:40:42.191149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:53.370 [2024-10-08 18:40:42.191157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:53.370 [2024-10-08 18:40:42.191164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:53.370 [2024-10-08 18:40:42.191173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:53.370 [2024-10-08 18:40:42.191179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:29:53.370 [2024-10-08 18:40:42.191186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:53.370 [2024-10-08 18:40:42.191216] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:29:53.370 [2024-10-08 18:40:42.191224] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:29:55.894 [2024-10-08 18:40:44.325697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.894 [2024-10-08 18:40:44.325791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:29:55.894 [2024-10-08 18:40:44.325815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2134.471 ms 00:29:55.894 [2024-10-08 18:40:44.325830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.894 [2024-10-08 18:40:44.334392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.894 [2024-10-08 18:40:44.334448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:55.894 [2024-10-08 18:40:44.334461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.468 ms 00:29:55.894 [2024-10-08 18:40:44.334473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.894 [2024-10-08 18:40:44.334564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.894 [2024-10-08 18:40:44.334575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:55.894 [2024-10-08 18:40:44.334586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:29:55.894 [2024-10-08 18:40:44.334595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.894 [2024-10-08 18:40:44.342575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.894 [2024-10-08 18:40:44.342621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:55.894 [2024-10-08 18:40:44.342632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.925 ms 00:29:55.894 [2024-10-08 18:40:44.342648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.894 [2024-10-08 18:40:44.342681] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.894 [2024-10-08 18:40:44.342696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:55.894 [2024-10-08 18:40:44.342705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:55.894 [2024-10-08 18:40:44.342717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.894 [2024-10-08 18:40:44.343077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.894 [2024-10-08 18:40:44.343097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:55.894 [2024-10-08 18:40:44.343106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:29:55.894 [2024-10-08 18:40:44.343116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.894 [2024-10-08 18:40:44.343225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.894 [2024-10-08 18:40:44.343237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:55.894 [2024-10-08 18:40:44.343248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:29:55.894 [2024-10-08 18:40:44.343258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.894 [2024-10-08 18:40:44.358674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.894 [2024-10-08 18:40:44.358724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:55.894 [2024-10-08 18:40:44.358737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.393 ms 00:29:55.894 [2024-10-08 18:40:44.358769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.894 [2024-10-08 18:40:44.367706] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:55.894 [2024-10-08 18:40:44.370513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 
[2024-10-08 18:40:44.370546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:55.895 [2024-10-08 18:40:44.370563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.640 ms 00:29:55.895 [2024-10-08 18:40:44.370573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.421586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.421813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:29:55.895 [2024-10-08 18:40:44.421842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.972 ms 00:29:55.895 [2024-10-08 18:40:44.421859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.422050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.422062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:55.895 [2024-10-08 18:40:44.422073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:29:55.895 [2024-10-08 18:40:44.422080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.425192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.425340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:29:55.895 [2024-10-08 18:40:44.425361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.068 ms 00:29:55.895 [2024-10-08 18:40:44.425370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.427963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.428004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:29:55.895 [2024-10-08 18:40:44.428017] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.551 ms 00:29:55.895 [2024-10-08 18:40:44.428025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.428323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.428333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:55.895 [2024-10-08 18:40:44.428346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:29:55.895 [2024-10-08 18:40:44.428358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.457928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.457980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:29:55.895 [2024-10-08 18:40:44.457994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.543 ms 00:29:55.895 [2024-10-08 18:40:44.458001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.461606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.461642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:29:55.895 [2024-10-08 18:40:44.461652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.552 ms 00:29:55.895 [2024-10-08 18:40:44.461659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.464390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.464420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:29:55.895 [2024-10-08 18:40:44.464430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:29:55.895 [2024-10-08 18:40:44.464435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 
18:40:44.467192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.467325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:55.895 [2024-10-08 18:40:44.467344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:29:55.895 [2024-10-08 18:40:44.467351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.467384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.467392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:55.895 [2024-10-08 18:40:44.467400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:55.895 [2024-10-08 18:40:44.467406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.467460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:55.895 [2024-10-08 18:40:44.467467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:55.895 [2024-10-08 18:40:44.467474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:29:55.895 [2024-10-08 18:40:44.467480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:55.895 [2024-10-08 18:40:44.468230] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2285.385 ms, result 0 00:29:55.895 { 00:29:55.895 "name": "ftl0", 00:29:55.895 "uuid": "f9394223-9466-424e-8b5d-3e12e0b5e3f4" 00:29:55.895 } 00:29:55.895 18:40:44 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:29:55.895 18:40:44 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:29:55.895 18:40:44 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:29:55.895 18:40:44 ftl.ftl_restore_fast -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:29:56.154 [2024-10-08 18:40:44.868981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.869031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:56.154 [2024-10-08 18:40:44.869043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:56.154 [2024-10-08 18:40:44.869052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.869072] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:56.154 [2024-10-08 18:40:44.869539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.869567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:56.154 [2024-10-08 18:40:44.869576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:29:56.154 [2024-10-08 18:40:44.869583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.869807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.869976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:56.154 [2024-10-08 18:40:44.869996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:29:56.154 [2024-10-08 18:40:44.870004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.872491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.872509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:56.154 [2024-10-08 18:40:44.872521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.465 ms 00:29:56.154 [2024-10-08 18:40:44.872528] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.877400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.877503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:56.154 [2024-10-08 18:40:44.877519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.852 ms 00:29:56.154 [2024-10-08 18:40:44.877525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.878911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.878934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:56.154 [2024-10-08 18:40:44.878942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.301 ms 00:29:56.154 [2024-10-08 18:40:44.878948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.882879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.882917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:56.154 [2024-10-08 18:40:44.882929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.557 ms 00:29:56.154 [2024-10-08 18:40:44.882937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.883039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.883051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:56.154 [2024-10-08 18:40:44.883060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:56.154 [2024-10-08 18:40:44.883066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.884633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 
18:40:44.884742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:56.154 [2024-10-08 18:40:44.884774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:29:56.154 [2024-10-08 18:40:44.884780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.885825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.885850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:56.154 [2024-10-08 18:40:44.885858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:29:56.154 [2024-10-08 18:40:44.885864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.886738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.886774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:56.154 [2024-10-08 18:40:44.886785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:29:56.154 [2024-10-08 18:40:44.886792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.887606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.154 [2024-10-08 18:40:44.887632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:56.154 [2024-10-08 18:40:44.887641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:29:56.154 [2024-10-08 18:40:44.887647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.154 [2024-10-08 18:40:44.887672] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:56.154 [2024-10-08 18:40:44.887683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887692] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887978] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.887993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:56.154 [2024-10-08 18:40:44.888069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888076] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 
18:40:44.888172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 
[2024-10-08 18:40:44.888269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:29:56.155 [2024-10-08 18:40:44.888370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: 
free 00:29:56.155 [2024-10-08 18:40:44.888465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 
state: free 00:29:56.155 [2024-10-08 18:40:44.888561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:56.155 [2024-10-08 18:40:44.888574] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:56.155 [2024-10-08 18:40:44.888582] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f9394223-9466-424e-8b5d-3e12e0b5e3f4 00:29:56.155 [2024-10-08 18:40:44.888588] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:29:56.155 [2024-10-08 18:40:44.888595] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:56.155 [2024-10-08 18:40:44.888601] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:56.155 [2024-10-08 18:40:44.888608] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:56.155 [2024-10-08 18:40:44.888613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:56.155 [2024-10-08 18:40:44.888621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:56.155 [2024-10-08 18:40:44.888627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:56.155 [2024-10-08 18:40:44.888633] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:56.155 [2024-10-08 18:40:44.888638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:56.155 [2024-10-08 18:40:44.888645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.155 [2024-10-08 18:40:44.888653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:56.155 [2024-10-08 18:40:44.888661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.974 ms 00:29:56.155 [2024-10-08 18:40:44.888667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.155 [2024-10-08 18:40:44.890049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.155 [2024-10-08 
18:40:44.890067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:56.155 [2024-10-08 18:40:44.890076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.347 ms 00:29:56.155 [2024-10-08 18:40:44.890082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.155 [2024-10-08 18:40:44.890156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:56.155 [2024-10-08 18:40:44.890162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:56.155 [2024-10-08 18:40:44.890170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:29:56.155 [2024-10-08 18:40:44.890176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.155 [2024-10-08 18:40:44.895029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.155 [2024-10-08 18:40:44.895452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:56.155 [2024-10-08 18:40:44.895477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.155 [2024-10-08 18:40:44.895484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.155 [2024-10-08 18:40:44.895547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.895554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:56.156 [2024-10-08 18:40:44.895562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.895568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.895627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.895636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:56.156 [2024-10-08 18:40:44.895644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.895649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.895664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.895672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:56.156 [2024-10-08 18:40:44.895680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.895686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.904125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.904165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:56.156 [2024-10-08 18:40:44.904175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.904182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.911218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.911255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:56.156 [2024-10-08 18:40:44.911265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.911273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.911341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.911350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:56.156 [2024-10-08 18:40:44.911358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.911364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.911394] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.911401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:56.156 [2024-10-08 18:40:44.911410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.911420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.911475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.911482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:56.156 [2024-10-08 18:40:44.911490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.911496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.911522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.911529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:56.156 [2024-10-08 18:40:44.911537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.911544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.911578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.911585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:56.156 [2024-10-08 18:40:44.911593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.911599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.911637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:56.156 [2024-10-08 18:40:44.911645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 
00:29:56.156 [2024-10-08 18:40:44.911654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:56.156 [2024-10-08 18:40:44.911660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:56.156 [2024-10-08 18:40:44.911780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 42.760 ms, result 0 00:29:56.156 true 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 94743 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 94743 ']' 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 94743 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94743 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:56.156 killing process with pid 94743 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94743' 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 94743 00:29:56.156 18:40:44 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 94743 00:30:06.182 18:40:53 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:08.718 262144+0 records in 00:30:08.718 262144+0 records out 00:30:08.718 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.6874 s, 291 MB/s 00:30:08.718 18:40:57 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:11.247 18:40:59 
ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:11.247 [2024-10-08 18:40:59.696902] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:30:11.247 [2024-10-08 18:40:59.697012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94929 ] 00:30:11.247 [2024-10-08 18:40:59.826238] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:11.247 [2024-10-08 18:40:59.845117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.247 [2024-10-08 18:40:59.882159] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.247 [2024-10-08 18:40:59.969830] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:11.247 [2024-10-08 18:40:59.969896] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:11.506 [2024-10-08 18:41:00.122650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.506 [2024-10-08 18:41:00.122712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:11.506 [2024-10-08 18:41:00.122731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:11.506 [2024-10-08 18:41:00.122739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.506 [2024-10-08 18:41:00.122802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.506 [2024-10-08 18:41:00.122813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:11.506 [2024-10-08 
18:41:00.122822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:11.506 [2024-10-08 18:41:00.122830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.506 [2024-10-08 18:41:00.122852] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:11.506 [2024-10-08 18:41:00.123111] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:11.506 [2024-10-08 18:41:00.123134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.506 [2024-10-08 18:41:00.123144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:11.506 [2024-10-08 18:41:00.123153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:30:11.506 [2024-10-08 18:41:00.123162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.506 [2024-10-08 18:41:00.124320] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:11.506 [2024-10-08 18:41:00.126389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.506 [2024-10-08 18:41:00.126425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:11.506 [2024-10-08 18:41:00.126435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.071 ms 00:30:11.506 [2024-10-08 18:41:00.126443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.506 [2024-10-08 18:41:00.126509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.506 [2024-10-08 18:41:00.126519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:11.506 [2024-10-08 18:41:00.126529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:30:11.506 [2024-10-08 18:41:00.126536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.506 [2024-10-08 
18:41:00.131406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.506 [2024-10-08 18:41:00.131437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:11.506 [2024-10-08 18:41:00.131450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:30:11.506 [2024-10-08 18:41:00.131458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.506 [2024-10-08 18:41:00.131529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.506 [2024-10-08 18:41:00.131538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:11.506 [2024-10-08 18:41:00.131546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:11.506 [2024-10-08 18:41:00.131553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.506 [2024-10-08 18:41:00.131590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.506 [2024-10-08 18:41:00.131599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:11.506 [2024-10-08 18:41:00.131607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:11.506 [2024-10-08 18:41:00.131618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.506 [2024-10-08 18:41:00.131638] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:11.506 [2024-10-08 18:41:00.132956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.506 [2024-10-08 18:41:00.132982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:11.506 [2024-10-08 18:41:00.132991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.323 ms 00:30:11.506 [2024-10-08 18:41:00.132998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.506 [2024-10-08 18:41:00.133025] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.506 [2024-10-08 18:41:00.133033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:11.506 [2024-10-08 18:41:00.133041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:11.506 [2024-10-08 18:41:00.133057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.506 [2024-10-08 18:41:00.133085] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:11.506 [2024-10-08 18:41:00.133105] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:11.506 [2024-10-08 18:41:00.133139] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:11.506 [2024-10-08 18:41:00.133163] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:11.506 [2024-10-08 18:41:00.133274] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:11.506 [2024-10-08 18:41:00.133291] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:11.507 [2024-10-08 18:41:00.133304] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:11.507 [2024-10-08 18:41:00.133314] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:11.507 [2024-10-08 18:41:00.133324] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:11.507 [2024-10-08 18:41:00.133332] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:11.507 [2024-10-08 18:41:00.133339] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:11.507 [2024-10-08 
18:41:00.133346] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:11.507 [2024-10-08 18:41:00.133353] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:11.507 [2024-10-08 18:41:00.133360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.507 [2024-10-08 18:41:00.133372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:11.507 [2024-10-08 18:41:00.133380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:30:11.507 [2024-10-08 18:41:00.133387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.507 [2024-10-08 18:41:00.133474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.507 [2024-10-08 18:41:00.133488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:11.507 [2024-10-08 18:41:00.133496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:11.507 [2024-10-08 18:41:00.133503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.507 [2024-10-08 18:41:00.133600] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:11.507 [2024-10-08 18:41:00.133616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:11.507 [2024-10-08 18:41:00.133625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.507 [2024-10-08 18:41:00.133634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:11.507 [2024-10-08 18:41:00.133650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:11.507 [2024-10-08 18:41:00.133667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:30:11.507 [2024-10-08 18:41:00.133681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.507 [2024-10-08 18:41:00.133696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:11.507 [2024-10-08 18:41:00.133704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:11.507 [2024-10-08 18:41:00.133713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:11.507 [2024-10-08 18:41:00.133721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:11.507 [2024-10-08 18:41:00.133729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:11.507 [2024-10-08 18:41:00.133737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:11.507 [2024-10-08 18:41:00.133765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:11.507 [2024-10-08 18:41:00.133772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:11.507 [2024-10-08 18:41:00.133789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.507 [2024-10-08 18:41:00.133804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:11.507 [2024-10-08 18:41:00.133812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.507 [2024-10-08 18:41:00.133827] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l2 00:30:11.507 [2024-10-08 18:41:00.133835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.507 [2024-10-08 18:41:00.133854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:11.507 [2024-10-08 18:41:00.133862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:11.507 [2024-10-08 18:41:00.133877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:11.507 [2024-10-08 18:41:00.133884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.507 [2024-10-08 18:41:00.133899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:11.507 [2024-10-08 18:41:00.133907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:11.507 [2024-10-08 18:41:00.133914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:11.507 [2024-10-08 18:41:00.133921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:11.507 [2024-10-08 18:41:00.133929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:11.507 [2024-10-08 18:41:00.133937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:11.507 [2024-10-08 18:41:00.133952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:11.507 [2024-10-08 18:41:00.133960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.507 [2024-10-08 18:41:00.133967] ftl_layout.c: 
775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:11.507 [2024-10-08 18:41:00.133982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:11.507 [2024-10-08 18:41:00.133991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:11.507 [2024-10-08 18:41:00.133998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:11.507 [2024-10-08 18:41:00.134008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:11.507 [2024-10-08 18:41:00.134016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:11.507 [2024-10-08 18:41:00.134023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:11.507 [2024-10-08 18:41:00.134029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:11.507 [2024-10-08 18:41:00.134036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:11.507 [2024-10-08 18:41:00.134042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:11.507 [2024-10-08 18:41:00.134051] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:11.507 [2024-10-08 18:41:00.134059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.507 [2024-10-08 18:41:00.134067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:11.507 [2024-10-08 18:41:00.134074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:11.507 [2024-10-08 18:41:00.134081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:11.507 [2024-10-08 18:41:00.134088] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:11.507 [2024-10-08 18:41:00.134095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:11.507 [2024-10-08 18:41:00.134104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:11.507 [2024-10-08 18:41:00.134111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:11.507 [2024-10-08 18:41:00.134118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:11.507 [2024-10-08 18:41:00.134125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:11.507 [2024-10-08 18:41:00.134132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:11.507 [2024-10-08 18:41:00.134138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:11.507 [2024-10-08 18:41:00.134145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:11.507 [2024-10-08 18:41:00.134154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:11.507 [2024-10-08 18:41:00.134161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:11.507 [2024-10-08 18:41:00.134168] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout 
- base dev: 00:30:11.507 [2024-10-08 18:41:00.134176] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:11.507 [2024-10-08 18:41:00.134188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:11.507 [2024-10-08 18:41:00.134195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:11.507 [2024-10-08 18:41:00.134203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:11.507 [2024-10-08 18:41:00.134210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:11.507 [2024-10-08 18:41:00.134217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.134227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:11.508 [2024-10-08 18:41:00.134234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.684 ms 00:30:11.508 [2024-10-08 18:41:00.134244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.150726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.150779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:11.508 [2024-10-08 18:41:00.150797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.437 ms 00:30:11.508 [2024-10-08 18:41:00.150806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.150897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.150906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize band addresses 00:30:11.508 [2024-10-08 18:41:00.150920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:30:11.508 [2024-10-08 18:41:00.150927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.159834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.159878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:11.508 [2024-10-08 18:41:00.159890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.845 ms 00:30:11.508 [2024-10-08 18:41:00.159900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.159939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.159950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:11.508 [2024-10-08 18:41:00.159960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:11.508 [2024-10-08 18:41:00.159969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.160347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.160381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:11.508 [2024-10-08 18:41:00.160392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:30:11.508 [2024-10-08 18:41:00.160402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.160557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.160580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:11.508 [2024-10-08 18:41:00.160591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:30:11.508 [2024-10-08 18:41:00.160606] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.165583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.165618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:11.508 [2024-10-08 18:41:00.165630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.946 ms 00:30:11.508 [2024-10-08 18:41:00.165640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.167797] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:11.508 [2024-10-08 18:41:00.167831] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:11.508 [2024-10-08 18:41:00.167844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.167853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:11.508 [2024-10-08 18:41:00.167861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.053 ms 00:30:11.508 [2024-10-08 18:41:00.167874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.182450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.182490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:11.508 [2024-10-08 18:41:00.182511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.484 ms 00:30:11.508 [2024-10-08 18:41:00.182520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.184045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.184076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:11.508 [2024-10-08 
18:41:00.184085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.487 ms 00:30:11.508 [2024-10-08 18:41:00.184092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.185324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.185354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:11.508 [2024-10-08 18:41:00.185362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.200 ms 00:30:11.508 [2024-10-08 18:41:00.185368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.185690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.185711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:11.508 [2024-10-08 18:41:00.185720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:30:11.508 [2024-10-08 18:41:00.185727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.200454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.200510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:11.508 [2024-10-08 18:41:00.200524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.711 ms 00:30:11.508 [2024-10-08 18:41:00.200532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.207998] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:11.508 [2024-10-08 18:41:00.210541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.210573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:11.508 [2024-10-08 18:41:00.210591] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 9.969 ms 00:30:11.508 [2024-10-08 18:41:00.210600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.210661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.210673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:11.508 [2024-10-08 18:41:00.210682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:11.508 [2024-10-08 18:41:00.210689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.210769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.210780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:11.508 [2024-10-08 18:41:00.210788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:30:11.508 [2024-10-08 18:41:00.210803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.210822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.210833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:11.508 [2024-10-08 18:41:00.210844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:11.508 [2024-10-08 18:41:00.210851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.210884] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:11.508 [2024-10-08 18:41:00.210894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.210901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:11.508 [2024-10-08 18:41:00.210911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:11.508 [2024-10-08 
18:41:00.210918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.214159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.214193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:11.508 [2024-10-08 18:41:00.214204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:30:11.508 [2024-10-08 18:41:00.214213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.214288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:11.508 [2024-10-08 18:41:00.214299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:11.508 [2024-10-08 18:41:00.214307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:11.508 [2024-10-08 18:41:00.214314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:11.508 [2024-10-08 18:41:00.215254] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 92.225 ms, result 0 00:30:12.441  [2024-10-08T18:41:02.662Z] Copying: 45/1024 [MB] (45 MBps) [2024-10-08T18:41:03.597Z] Copying: 86/1024 [MB] (41 MBps) [2024-10-08T18:41:04.533Z] Copying: 123/1024 [MB] (37 MBps) [2024-10-08T18:41:05.466Z] Copying: 144/1024 [MB] (20 MBps) [2024-10-08T18:41:06.454Z] Copying: 183/1024 [MB] (38 MBps) [2024-10-08T18:41:07.389Z] Copying: 221/1024 [MB] (38 MBps) [2024-10-08T18:41:08.321Z] Copying: 245/1024 [MB] (24 MBps) [2024-10-08T18:41:09.255Z] Copying: 285/1024 [MB] (40 MBps) [2024-10-08T18:41:10.633Z] Copying: 320/1024 [MB] (35 MBps) [2024-10-08T18:41:11.571Z] Copying: 339/1024 [MB] (18 MBps) [2024-10-08T18:41:12.511Z] Copying: 355/1024 [MB] (16 MBps) [2024-10-08T18:41:13.446Z] Copying: 374/1024 [MB] (18 MBps) [2024-10-08T18:41:14.379Z] Copying: 412/1024 [MB] (38 MBps) [2024-10-08T18:41:15.343Z] Copying: 456/1024 
[MB] (43 MBps) [2024-10-08T18:41:16.282Z] Copying: 501/1024 [MB] (45 MBps) [2024-10-08T18:41:17.252Z] Copying: 539/1024 [MB] (38 MBps) [2024-10-08T18:41:18.623Z] Copying: 572/1024 [MB] (32 MBps) [2024-10-08T18:41:19.557Z] Copying: 616/1024 [MB] (44 MBps) [2024-10-08T18:41:20.489Z] Copying: 661/1024 [MB] (45 MBps) [2024-10-08T18:41:21.421Z] Copying: 710/1024 [MB] (48 MBps) [2024-10-08T18:41:22.352Z] Copying: 761/1024 [MB] (51 MBps) [2024-10-08T18:41:23.290Z] Copying: 809/1024 [MB] (47 MBps) [2024-10-08T18:41:24.669Z] Copying: 838/1024 [MB] (28 MBps) [2024-10-08T18:41:25.235Z] Copying: 866/1024 [MB] (28 MBps) [2024-10-08T18:41:26.607Z] Copying: 899/1024 [MB] (32 MBps) [2024-10-08T18:41:27.539Z] Copying: 941/1024 [MB] (42 MBps) [2024-10-08T18:41:28.480Z] Copying: 980/1024 [MB] (38 MBps) [2024-10-08T18:41:28.480Z] Copying: 1023/1024 [MB] (42 MBps) [2024-10-08T18:41:28.480Z] Copying: 1024/1024 [MB] (average 36 MBps)[2024-10-08 18:41:28.243425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.630 [2024-10-08 18:41:28.243553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:39.630 [2024-10-08 18:41:28.243618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:39.630 [2024-10-08 18:41:28.243642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.630 [2024-10-08 18:41:28.243684] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:39.630 [2024-10-08 18:41:28.244287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.630 [2024-10-08 18:41:28.244387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:39.630 [2024-10-08 18:41:28.244443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:30:39.630 [2024-10-08 18:41:28.244466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.630 [2024-10-08 
18:41:28.245967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.630 [2024-10-08 18:41:28.246072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:39.630 [2024-10-08 18:41:28.246087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:30:39.630 [2024-10-08 18:41:28.246103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.630 [2024-10-08 18:41:28.246139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.630 [2024-10-08 18:41:28.246147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:30:39.630 [2024-10-08 18:41:28.246157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:39.630 [2024-10-08 18:41:28.246165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.630 [2024-10-08 18:41:28.246207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.630 [2024-10-08 18:41:28.246216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:30:39.630 [2024-10-08 18:41:28.246224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:39.630 [2024-10-08 18:41:28.246235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.630 [2024-10-08 18:41:28.246248] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:39.630 [2024-10-08 18:41:28.246259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:39.630 [2024-10-08 18:41:28.246369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 
wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 
/ 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 
0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.246974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
74: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:39.631 [2024-10-08 18:41:28.247799] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:39.631 [2024-10-08 18:41:28.247806] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
f9394223-9466-424e-8b5d-3e12e0b5e3f4 00:30:39.632 [2024-10-08 18:41:28.247814] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:39.632 [2024-10-08 18:41:28.247822] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:30:39.632 [2024-10-08 18:41:28.247829] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:39.632 [2024-10-08 18:41:28.247836] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:39.632 [2024-10-08 18:41:28.247844] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:39.632 [2024-10-08 18:41:28.247852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:39.632 [2024-10-08 18:41:28.247859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:39.632 [2024-10-08 18:41:28.247865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:39.632 [2024-10-08 18:41:28.247871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:39.632 [2024-10-08 18:41:28.247879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.632 [2024-10-08 18:41:28.247887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:39.632 [2024-10-08 18:41:28.247895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.632 ms 00:30:39.632 [2024-10-08 18:41:28.247907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.249310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.632 [2024-10-08 18:41:28.249335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:39.632 [2024-10-08 18:41:28.249345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:30:39.632 [2024-10-08 18:41:28.249361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.249439] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:39.632 [2024-10-08 18:41:28.249448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:39.632 [2024-10-08 18:41:28.249458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:39.632 [2024-10-08 18:41:28.249465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.253871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.253909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:39.632 [2024-10-08 18:41:28.253919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.253926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.253985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.253994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:39.632 [2024-10-08 18:41:28.254005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.254012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.254060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.254069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:39.632 [2024-10-08 18:41:28.254077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.254084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.254099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.254106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:30:39.632 [2024-10-08 18:41:28.254118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.254127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.263390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.263442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:39.632 [2024-10-08 18:41:28.263453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.263461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.270495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.270545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:39.632 [2024-10-08 18:41:28.270563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.270571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.270602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.270611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:39.632 [2024-10-08 18:41:28.270625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.270632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.270673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.270682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:39.632 [2024-10-08 18:41:28.270693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.270702] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.270833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.270850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:39.632 [2024-10-08 18:41:28.270858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.270866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.270891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.270900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:39.632 [2024-10-08 18:41:28.270907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.270914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.270951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.270959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:39.632 [2024-10-08 18:41:28.270967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.270974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.271012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:39.632 [2024-10-08 18:41:28.271021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:39.632 [2024-10-08 18:41:28.271029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:39.632 [2024-10-08 18:41:28.271036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:39.632 [2024-10-08 18:41:28.271151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast 
shutdown', duration = 27.694 ms, result 0 00:30:40.566 00:30:40.566 00:30:40.566 18:41:29 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:30:40.566 [2024-10-08 18:41:29.298217] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:30:40.566 [2024-10-08 18:41:29.298411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95226 ] 00:30:40.824 [2024-10-08 18:41:29.444859] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:40.824 [2024-10-08 18:41:29.463635] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.824 [2024-10-08 18:41:29.498628] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.824 [2024-10-08 18:41:29.587430] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:40.824 [2024-10-08 18:41:29.587500] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:41.084 [2024-10-08 18:41:29.740849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.084 [2024-10-08 18:41:29.740910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:41.084 [2024-10-08 18:41:29.740925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:41.084 [2024-10-08 18:41:29.740934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.084 [2024-10-08 18:41:29.740992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.084 [2024-10-08 
18:41:29.741003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:41.084 [2024-10-08 18:41:29.741011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:30:41.084 [2024-10-08 18:41:29.741019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.084 [2024-10-08 18:41:29.741046] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:41.084 [2024-10-08 18:41:29.741410] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:41.084 [2024-10-08 18:41:29.741433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.084 [2024-10-08 18:41:29.741443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:41.084 [2024-10-08 18:41:29.741452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:30:41.084 [2024-10-08 18:41:29.741462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.084 [2024-10-08 18:41:29.741736] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:30:41.084 [2024-10-08 18:41:29.741773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.084 [2024-10-08 18:41:29.741781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:41.084 [2024-10-08 18:41:29.741789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:30:41.084 [2024-10-08 18:41:29.741797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.085 [2024-10-08 18:41:29.741849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.085 [2024-10-08 18:41:29.741860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:41.085 [2024-10-08 18:41:29.741868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:30:41.085 
[2024-10-08 18:41:29.741875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.085 [2024-10-08 18:41:29.742112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.085 [2024-10-08 18:41:29.742131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:41.085 [2024-10-08 18:41:29.742140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:30:41.085 [2024-10-08 18:41:29.742152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.085 [2024-10-08 18:41:29.742217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.085 [2024-10-08 18:41:29.742226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:41.085 [2024-10-08 18:41:29.742234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:41.085 [2024-10-08 18:41:29.742241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.085 [2024-10-08 18:41:29.742262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.085 [2024-10-08 18:41:29.742270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:41.085 [2024-10-08 18:41:29.742278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:41.085 [2024-10-08 18:41:29.742289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.085 [2024-10-08 18:41:29.742305] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:41.085 [2024-10-08 18:41:29.743798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.085 [2024-10-08 18:41:29.743831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:41.085 [2024-10-08 18:41:29.743841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.496 ms 00:30:41.085 [2024-10-08 18:41:29.743849] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.085 [2024-10-08 18:41:29.743882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.085 [2024-10-08 18:41:29.743890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:41.085 [2024-10-08 18:41:29.743902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:41.085 [2024-10-08 18:41:29.743909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.085 [2024-10-08 18:41:29.743926] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:41.085 [2024-10-08 18:41:29.743943] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:41.085 [2024-10-08 18:41:29.743986] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:41.085 [2024-10-08 18:41:29.744004] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:41.085 [2024-10-08 18:41:29.744105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:41.085 [2024-10-08 18:41:29.744121] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:41.085 [2024-10-08 18:41:29.744131] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:41.085 [2024-10-08 18:41:29.744141] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744150] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744166] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:41.085 [2024-10-08 
18:41:29.744173] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:41.085 [2024-10-08 18:41:29.744180] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:41.085 [2024-10-08 18:41:29.744187] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:41.085 [2024-10-08 18:41:29.744194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.085 [2024-10-08 18:41:29.744201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:41.085 [2024-10-08 18:41:29.744209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:30:41.085 [2024-10-08 18:41:29.744215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.085 [2024-10-08 18:41:29.744299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.085 [2024-10-08 18:41:29.744322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:41.085 [2024-10-08 18:41:29.744330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:30:41.085 [2024-10-08 18:41:29.744339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.085 [2024-10-08 18:41:29.744449] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:41.085 [2024-10-08 18:41:29.744466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:41.085 [2024-10-08 18:41:29.744476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:41.085 [2024-10-08 18:41:29.744503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 80.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:41.085 [2024-10-08 18:41:29.744527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:41.085 [2024-10-08 18:41:29.744549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:41.085 [2024-10-08 18:41:29.744557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:41.085 [2024-10-08 18:41:29.744564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:41.085 [2024-10-08 18:41:29.744571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:41.085 [2024-10-08 18:41:29.744579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:41.085 [2024-10-08 18:41:29.744587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:41.085 [2024-10-08 18:41:29.744605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:41.085 [2024-10-08 18:41:29.744632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:41.085 [2024-10-08 18:41:29.744655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744662] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:41.085 [2024-10-08 18:41:29.744677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:41.085 [2024-10-08 18:41:29.744700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:41.085 [2024-10-08 18:41:29.744723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:41.085 [2024-10-08 18:41:29.744737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:41.085 [2024-10-08 18:41:29.744762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:41.085 [2024-10-08 18:41:29.744771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:41.085 [2024-10-08 18:41:29.744778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:41.085 [2024-10-08 18:41:29.744786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:41.085 [2024-10-08 18:41:29.744793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:41.085 [2024-10-08 18:41:29.744809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:41.085 [2024-10-08 18:41:29.744816] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744824] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:41.085 [2024-10-08 18:41:29.744832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:41.085 [2024-10-08 18:41:29.744841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:41.085 [2024-10-08 18:41:29.744861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:41.085 [2024-10-08 18:41:29.744869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:41.085 [2024-10-08 18:41:29.744877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:41.085 [2024-10-08 18:41:29.744884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:41.085 [2024-10-08 18:41:29.744893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:41.085 [2024-10-08 18:41:29.744899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:41.085 [2024-10-08 18:41:29.744907] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:41.085 [2024-10-08 18:41:29.744916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:41.085 [2024-10-08 18:41:29.744924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:41.085 [2024-10-08 18:41:29.744932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:41.085 [2024-10-08 18:41:29.744939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:41.085 [2024-10-08 18:41:29.744945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:41.085 [2024-10-08 18:41:29.744952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:41.086 [2024-10-08 18:41:29.744959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:41.086 [2024-10-08 18:41:29.744966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:41.086 [2024-10-08 18:41:29.744973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:41.086 [2024-10-08 18:41:29.744981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:41.086 [2024-10-08 18:41:29.744988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:41.086 [2024-10-08 18:41:29.744995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:41.086 [2024-10-08 18:41:29.745002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:41.086 [2024-10-08 18:41:29.745010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:41.086 [2024-10-08 18:41:29.745018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:41.086 [2024-10-08 
18:41:29.745025] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:41.086 [2024-10-08 18:41:29.745033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:41.086 [2024-10-08 18:41:29.745040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:41.086 [2024-10-08 18:41:29.745047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:41.086 [2024-10-08 18:41:29.745055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:41.086 [2024-10-08 18:41:29.745062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:41.086 [2024-10-08 18:41:29.745069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.745077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:41.086 [2024-10-08 18:41:29.745084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:30:41.086 [2024-10-08 18:41:29.745091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.760911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.760961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:41.086 [2024-10-08 18:41:29.760978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.764 ms 00:30:41.086 [2024-10-08 18:41:29.760986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.761083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.761097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:41.086 [2024-10-08 18:41:29.761106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:30:41.086 [2024-10-08 18:41:29.761120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.769777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.769831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:41.086 [2024-10-08 18:41:29.769843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.581 ms 00:30:41.086 [2024-10-08 18:41:29.769853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.769899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.769917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:41.086 [2024-10-08 18:41:29.769932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:41.086 [2024-10-08 18:41:29.769940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.770032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.770044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:41.086 [2024-10-08 18:41:29.770058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:41.086 [2024-10-08 18:41:29.770066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.770196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.770214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:41.086 [2024-10-08 18:41:29.770224] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:30:41.086 [2024-10-08 18:41:29.770235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.775155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.775197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:41.086 [2024-10-08 18:41:29.775209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.897 ms 00:30:41.086 [2024-10-08 18:41:29.775224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.775374] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:41.086 [2024-10-08 18:41:29.775396] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:41.086 [2024-10-08 18:41:29.775408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.775417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:41.086 [2024-10-08 18:41:29.775427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:41.086 [2024-10-08 18:41:29.775435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.788182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.788225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:41.086 [2024-10-08 18:41:29.788243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.724 ms 00:30:41.086 [2024-10-08 18:41:29.788253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.788370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.788378] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:41.086 [2024-10-08 18:41:29.788387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:30:41.086 [2024-10-08 18:41:29.788394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.788448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.788468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:41.086 [2024-10-08 18:41:29.788479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:30:41.086 [2024-10-08 18:41:29.788486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.788826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.788844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:41.086 [2024-10-08 18:41:29.788852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:30:41.086 [2024-10-08 18:41:29.788861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.788877] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:30:41.086 [2024-10-08 18:41:29.788886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.788893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:41.086 [2024-10-08 18:41:29.788901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:41.086 [2024-10-08 18:41:29.788911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.796860] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:41.086 [2024-10-08 
18:41:29.796997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.797012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:41.086 [2024-10-08 18:41:29.797028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.070 ms 00:30:41.086 [2024-10-08 18:41:29.797037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.799296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.799322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:41.086 [2024-10-08 18:41:29.799332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.236 ms 00:30:41.086 [2024-10-08 18:41:29.799340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.799420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.799429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:41.086 [2024-10-08 18:41:29.799438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:30:41.086 [2024-10-08 18:41:29.799445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.799487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.799496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:41.086 [2024-10-08 18:41:29.799504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:41.086 [2024-10-08 18:41:29.799510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.799539] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:41.086 [2024-10-08 18:41:29.799556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:30:41.086 [2024-10-08 18:41:29.799564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:41.086 [2024-10-08 18:41:29.799572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:30:41.086 [2024-10-08 18:41:29.799579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.803247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.803290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:41.086 [2024-10-08 18:41:29.803301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.645 ms 00:30:41.086 [2024-10-08 18:41:29.803309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.803379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:41.086 [2024-10-08 18:41:29.803389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:41.086 [2024-10-08 18:41:29.803398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:30:41.086 [2024-10-08 18:41:29.803406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:41.086 [2024-10-08 18:41:29.804373] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 63.070 ms, result 0 00:30:42.493  [2024-10-08T18:41:32.273Z] Copying: 45/1024 [MB] (45 MBps) [2024-10-08T18:41:33.205Z] Copying: 90/1024 [MB] (45 MBps) [2024-10-08T18:41:34.138Z] Copying: 136/1024 [MB] (46 MBps) [2024-10-08T18:41:35.073Z] Copying: 185/1024 [MB] (49 MBps) [2024-10-08T18:41:36.008Z] Copying: 231/1024 [MB] (46 MBps) [2024-10-08T18:41:36.981Z] Copying: 279/1024 [MB] (47 MBps) [2024-10-08T18:41:38.357Z] Copying: 324/1024 [MB] (45 MBps) [2024-10-08T18:41:39.318Z] Copying: 375/1024 [MB] (50 MBps) [2024-10-08T18:41:40.249Z] Copying: 420/1024 [MB] (44 MBps) 
[2024-10-08T18:41:41.182Z] Copying: 464/1024 [MB] (44 MBps) [2024-10-08T18:41:42.115Z] Copying: 511/1024 [MB] (46 MBps) [2024-10-08T18:41:43.047Z] Copying: 558/1024 [MB] (46 MBps) [2024-10-08T18:41:43.982Z] Copying: 604/1024 [MB] (46 MBps) [2024-10-08T18:41:45.354Z] Copying: 652/1024 [MB] (47 MBps) [2024-10-08T18:41:46.310Z] Copying: 699/1024 [MB] (47 MBps) [2024-10-08T18:41:47.242Z] Copying: 747/1024 [MB] (48 MBps) [2024-10-08T18:41:48.204Z] Copying: 794/1024 [MB] (47 MBps) [2024-10-08T18:41:49.139Z] Copying: 843/1024 [MB] (48 MBps) [2024-10-08T18:41:50.101Z] Copying: 890/1024 [MB] (46 MBps) [2024-10-08T18:41:51.043Z] Copying: 937/1024 [MB] (46 MBps) [2024-10-08T18:41:51.976Z] Copying: 982/1024 [MB] (45 MBps) [2024-10-08T18:41:52.237Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-10-08 18:41:52.079524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.387 [2024-10-08 18:41:52.079591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:03.387 [2024-10-08 18:41:52.079606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:03.387 [2024-10-08 18:41:52.079614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.387 [2024-10-08 18:41:52.079636] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:03.387 [2024-10-08 18:41:52.080281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.387 [2024-10-08 18:41:52.080312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:03.387 [2024-10-08 18:41:52.080322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:31:03.387 [2024-10-08 18:41:52.080329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.387 [2024-10-08 18:41:52.080551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.387 [2024-10-08 18:41:52.080569] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:03.387 [2024-10-08 18:41:52.080579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:31:03.387 [2024-10-08 18:41:52.080587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.387 [2024-10-08 18:41:52.080614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.387 [2024-10-08 18:41:52.080626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:03.387 [2024-10-08 18:41:52.080635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:03.387 [2024-10-08 18:41:52.080643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.387 [2024-10-08 18:41:52.080696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.387 [2024-10-08 18:41:52.080706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:03.387 [2024-10-08 18:41:52.080714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:31:03.387 [2024-10-08 18:41:52.080722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.387 [2024-10-08 18:41:52.080736] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:03.387 [2024-10-08 18:41:52.080766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 
261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 
261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.080999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 
/ 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 
0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
61: 0 / 261120 wr_cnt: 0 state: free 00:31:03.387 [2024-10-08 18:41:52.081253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:03.388 [2024-10-08 18:41:52.081587] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:03.388 [2024-10-08 18:41:52.081598] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f9394223-9466-424e-8b5d-3e12e0b5e3f4 00:31:03.388 [2024-10-08 18:41:52.081608] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 
0 00:31:03.388 [2024-10-08 18:41:52.081623] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:31:03.388 [2024-10-08 18:41:52.081637] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:03.388 [2024-10-08 18:41:52.081645] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:03.388 [2024-10-08 18:41:52.081652] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:03.388 [2024-10-08 18:41:52.081666] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:03.388 [2024-10-08 18:41:52.081674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:03.388 [2024-10-08 18:41:52.081681] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:03.388 [2024-10-08 18:41:52.081687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:03.388 [2024-10-08 18:41:52.081694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.388 [2024-10-08 18:41:52.081701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:03.388 [2024-10-08 18:41:52.081709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:31:03.388 [2024-10-08 18:41:52.081716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.083376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.388 [2024-10-08 18:41:52.083414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:03.388 [2024-10-08 18:41:52.083431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.645 ms 00:31:03.388 [2024-10-08 18:41:52.083441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.083525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.388 [2024-10-08 18:41:52.083534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:31:03.388 [2024-10-08 18:41:52.083544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:03.388 [2024-10-08 18:41:52.083557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.088738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.088797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:03.388 [2024-10-08 18:41:52.088811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 18:41:52.088821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.088901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.088911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:03.388 [2024-10-08 18:41:52.088921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 18:41:52.088933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.088969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.088979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:03.388 [2024-10-08 18:41:52.088993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 18:41:52.089002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.089020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.089034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:03.388 [2024-10-08 18:41:52.089043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 
18:41:52.089052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.099218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.099274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:03.388 [2024-10-08 18:41:52.099285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 18:41:52.099293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.107370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.107422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:03.388 [2024-10-08 18:41:52.107433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 18:41:52.107449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.107503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.107515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:03.388 [2024-10-08 18:41:52.107526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 18:41:52.107533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.107556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.107564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:03.388 [2024-10-08 18:41:52.107571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 18:41:52.107578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.107626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:31:03.388 [2024-10-08 18:41:52.107637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:03.388 [2024-10-08 18:41:52.107645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 18:41:52.107655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.107682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.107693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:03.388 [2024-10-08 18:41:52.107701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 18:41:52.107708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.107740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.107766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:03.388 [2024-10-08 18:41:52.107786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.388 [2024-10-08 18:41:52.107793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.388 [2024-10-08 18:41:52.107834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:03.388 [2024-10-08 18:41:52.107851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:03.388 [2024-10-08 18:41:52.107859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:03.389 [2024-10-08 18:41:52.107866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.389 [2024-10-08 18:41:52.107976] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 28.434 ms, result 0 00:31:03.665 00:31:03.665 00:31:03.665 18:41:52 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:06.189 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:06.189 18:41:54 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:31:06.189 [2024-10-08 18:41:54.526023] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:31:06.189 [2024-10-08 18:41:54.526143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95481 ] 00:31:06.189 [2024-10-08 18:41:54.654650] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:06.189 [2024-10-08 18:41:54.677089] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.189 [2024-10-08 18:41:54.711796] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.189 [2024-10-08 18:41:54.800655] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:06.189 [2024-10-08 18:41:54.800725] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:06.189 [2024-10-08 18:41:54.954436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.189 [2024-10-08 18:41:54.954492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:06.189 [2024-10-08 18:41:54.954511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:06.189 [2024-10-08 18:41:54.954521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.189 [2024-10-08 18:41:54.954577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:06.189 [2024-10-08 18:41:54.954589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:06.189 [2024-10-08 18:41:54.954602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:06.189 [2024-10-08 18:41:54.954614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.189 [2024-10-08 18:41:54.954635] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:06.189 [2024-10-08 18:41:54.954914] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:06.189 [2024-10-08 18:41:54.954941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.189 [2024-10-08 18:41:54.954953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:06.189 [2024-10-08 18:41:54.954961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:31:06.189 [2024-10-08 18:41:54.954974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.189 [2024-10-08 18:41:54.955251] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:06.189 [2024-10-08 18:41:54.955280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.189 [2024-10-08 18:41:54.955289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:06.189 [2024-10-08 18:41:54.955303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:31:06.189 [2024-10-08 18:41:54.955311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.189 [2024-10-08 18:41:54.955359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.189 [2024-10-08 18:41:54.955378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:06.189 [2024-10-08 18:41:54.955387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.029 ms 00:31:06.189 [2024-10-08 18:41:54.955395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.189 [2024-10-08 18:41:54.955636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.189 [2024-10-08 18:41:54.955654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:06.189 [2024-10-08 18:41:54.955664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:31:06.189 [2024-10-08 18:41:54.955673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.189 [2024-10-08 18:41:54.955738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.189 [2024-10-08 18:41:54.955776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:06.189 [2024-10-08 18:41:54.955785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:31:06.189 [2024-10-08 18:41:54.955793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.189 [2024-10-08 18:41:54.955815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.189 [2024-10-08 18:41:54.955823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:06.189 [2024-10-08 18:41:54.955832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:06.190 [2024-10-08 18:41:54.955840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.190 [2024-10-08 18:41:54.955861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:06.190 [2024-10-08 18:41:54.957334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.190 [2024-10-08 18:41:54.957365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:06.190 [2024-10-08 18:41:54.957375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:31:06.190 
[2024-10-08 18:41:54.957383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.190 [2024-10-08 18:41:54.957413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.190 [2024-10-08 18:41:54.957422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:06.190 [2024-10-08 18:41:54.957431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:06.190 [2024-10-08 18:41:54.957438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.190 [2024-10-08 18:41:54.957457] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:06.190 [2024-10-08 18:41:54.957476] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:06.190 [2024-10-08 18:41:54.957517] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:06.190 [2024-10-08 18:41:54.957532] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:06.190 [2024-10-08 18:41:54.957635] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:06.190 [2024-10-08 18:41:54.957652] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:06.190 [2024-10-08 18:41:54.957664] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:06.190 [2024-10-08 18:41:54.957676] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:06.190 [2024-10-08 18:41:54.957685] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:06.190 [2024-10-08 18:41:54.957698] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 
00:31:06.190 [2024-10-08 18:41:54.957710] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:06.190 [2024-10-08 18:41:54.957718] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:06.190 [2024-10-08 18:41:54.957725] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:06.190 [2024-10-08 18:41:54.957733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.190 [2024-10-08 18:41:54.957743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:06.190 [2024-10-08 18:41:54.957762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:31:06.190 [2024-10-08 18:41:54.957770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.190 [2024-10-08 18:41:54.957853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.190 [2024-10-08 18:41:54.957866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:06.190 [2024-10-08 18:41:54.957878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:31:06.190 [2024-10-08 18:41:54.957888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.190 [2024-10-08 18:41:54.957998] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:06.190 [2024-10-08 18:41:54.958010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:06.190 [2024-10-08 18:41:54.958021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:06.190 [2024-10-08 18:41:54.958032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:06.190 [2024-10-08 18:41:54.958049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958058] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:06.190 [2024-10-08 18:41:54.958067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:06.190 [2024-10-08 18:41:54.958075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:06.190 [2024-10-08 18:41:54.958098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:06.190 [2024-10-08 18:41:54.958107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:06.190 [2024-10-08 18:41:54.958115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:06.190 [2024-10-08 18:41:54.958123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:06.190 [2024-10-08 18:41:54.958130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:06.190 [2024-10-08 18:41:54.958138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:06.190 [2024-10-08 18:41:54.958153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:06.190 [2024-10-08 18:41:54.958161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:06.190 [2024-10-08 18:41:54.958183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.190 [2024-10-08 18:41:54.958201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:06.190 [2024-10-08 18:41:54.958209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958217] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.190 [2024-10-08 18:41:54.958224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:06.190 [2024-10-08 18:41:54.958232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.190 [2024-10-08 18:41:54.958247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:06.190 [2024-10-08 18:41:54.958254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:06.190 [2024-10-08 18:41:54.958270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:06.190 [2024-10-08 18:41:54.958277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:06.190 [2024-10-08 18:41:54.958291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:06.190 [2024-10-08 18:41:54.958305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:06.190 [2024-10-08 18:41:54.958313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:06.190 [2024-10-08 18:41:54.958321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:06.190 [2024-10-08 18:41:54.958328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:06.190 [2024-10-08 18:41:54.958337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:06.190 [2024-10-08 18:41:54.958353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:06.190 
[2024-10-08 18:41:54.958360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958367] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:06.190 [2024-10-08 18:41:54.958377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:06.190 [2024-10-08 18:41:54.958385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:06.190 [2024-10-08 18:41:54.958396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:06.190 [2024-10-08 18:41:54.958407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:06.190 [2024-10-08 18:41:54.958414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:06.190 [2024-10-08 18:41:54.958422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:06.190 [2024-10-08 18:41:54.958430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:06.190 [2024-10-08 18:41:54.958439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:06.190 [2024-10-08 18:41:54.958448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:06.190 [2024-10-08 18:41:54.958458] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:06.190 [2024-10-08 18:41:54.958468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.190 [2024-10-08 18:41:54.958477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:06.190 [2024-10-08 18:41:54.958486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:06.190 [2024-10-08 18:41:54.958495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:06.190 [2024-10-08 18:41:54.958503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:06.190 [2024-10-08 18:41:54.958513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:06.190 [2024-10-08 18:41:54.958520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:06.190 [2024-10-08 18:41:54.958529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:06.190 [2024-10-08 18:41:54.958538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:06.190 [2024-10-08 18:41:54.958546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:06.190 [2024-10-08 18:41:54.958554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:06.190 [2024-10-08 18:41:54.958562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:06.190 [2024-10-08 18:41:54.958571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:06.190 [2024-10-08 18:41:54.958581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:06.190 [2024-10-08 18:41:54.958589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:31:06.190 [2024-10-08 18:41:54.958597] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:06.190 [2024-10-08 18:41:54.958606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:06.190 [2024-10-08 18:41:54.958616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:06.191 [2024-10-08 18:41:54.958624] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:06.191 [2024-10-08 18:41:54.958632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:06.191 [2024-10-08 18:41:54.958641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:06.191 [2024-10-08 18:41:54.958650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:54.958658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:06.191 [2024-10-08 18:41:54.958666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:31:06.191 [2024-10-08 18:41:54.958675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:54.972929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:54.972978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:06.191 [2024-10-08 18:41:54.972995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.197 ms 00:31:06.191 [2024-10-08 18:41:54.973004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:54.973106] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:54.973117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:06.191 [2024-10-08 18:41:54.973126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:06.191 [2024-10-08 18:41:54.973141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:54.982791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:54.982844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:06.191 [2024-10-08 18:41:54.982859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.570 ms 00:31:06.191 [2024-10-08 18:41:54.982869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:54.982922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:54.982936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:06.191 [2024-10-08 18:41:54.982947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:06.191 [2024-10-08 18:41:54.982957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:54.983058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:54.983080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:06.191 [2024-10-08 18:41:54.983098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:06.191 [2024-10-08 18:41:54.983109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:54.983262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:54.983282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:06.191 
[2024-10-08 18:41:54.983293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:31:06.191 [2024-10-08 18:41:54.983307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:54.988218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:54.988258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:06.191 [2024-10-08 18:41:54.988268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.862 ms 00:31:06.191 [2024-10-08 18:41:54.988281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:54.988400] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:06.191 [2024-10-08 18:41:54.988415] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:06.191 [2024-10-08 18:41:54.988424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:54.988432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:06.191 [2024-10-08 18:41:54.988442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:06.191 [2024-10-08 18:41:54.988450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.000757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.000800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:06.191 [2024-10-08 18:41:55.000820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.276 ms 00:31:06.191 [2024-10-08 18:41:55.000831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.000954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 
[2024-10-08 18:41:55.000970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:06.191 [2024-10-08 18:41:55.000979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:31:06.191 [2024-10-08 18:41:55.000987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.001045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.001055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:06.191 [2024-10-08 18:41:55.001071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:06.191 [2024-10-08 18:41:55.001079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.001408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.001429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:06.191 [2024-10-08 18:41:55.001438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:31:06.191 [2024-10-08 18:41:55.001447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.001464] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:06.191 [2024-10-08 18:41:55.001473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.001482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:06.191 [2024-10-08 18:41:55.001489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:06.191 [2024-10-08 18:41:55.001499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.009500] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 
00:31:06.191 [2024-10-08 18:41:55.009648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.009659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:06.191 [2024-10-08 18:41:55.009669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.131 ms 00:31:06.191 [2024-10-08 18:41:55.009677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.011964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.011993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:06.191 [2024-10-08 18:41:55.012004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.260 ms 00:31:06.191 [2024-10-08 18:41:55.012013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.012095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.012105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:06.191 [2024-10-08 18:41:55.012114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:31:06.191 [2024-10-08 18:41:55.012122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.012163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.012172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:06.191 [2024-10-08 18:41:55.012180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:06.191 [2024-10-08 18:41:55.012187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.012217] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:06.191 [2024-10-08 18:41:55.012229] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.012238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:06.191 [2024-10-08 18:41:55.012246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:06.191 [2024-10-08 18:41:55.012254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.015883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.015923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:06.191 [2024-10-08 18:41:55.015933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.610 ms 00:31:06.191 [2024-10-08 18:41:55.015942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.016014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:06.191 [2024-10-08 18:41:55.016024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:06.191 [2024-10-08 18:41:55.016033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:06.191 [2024-10-08 18:41:55.016041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:06.191 [2024-10-08 18:41:55.016939] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 62.096 ms, result 0 00:31:07.564  [2024-10-08T18:41:57.355Z] Copying: 44/1024 [MB] (44 MBps) [2024-10-08T18:41:58.286Z] Copying: 90/1024 [MB] (45 MBps) [2024-10-08T18:41:59.263Z] Copying: 135/1024 [MB] (45 MBps) [2024-10-08T18:42:00.195Z] Copying: 181/1024 [MB] (45 MBps) [2024-10-08T18:42:01.130Z] Copying: 224/1024 [MB] (43 MBps) [2024-10-08T18:42:02.062Z] Copying: 263/1024 [MB] (39 MBps) [2024-10-08T18:42:03.434Z] Copying: 305/1024 [MB] (42 MBps) [2024-10-08T18:42:04.366Z] Copying: 350/1024 [MB] (44 MBps) [2024-10-08T18:42:05.320Z] 
Copying: 394/1024 [MB] (44 MBps) [2024-10-08T18:42:06.251Z] Copying: 438/1024 [MB] (43 MBps) [2024-10-08T18:42:07.183Z] Copying: 481/1024 [MB] (43 MBps) [2024-10-08T18:42:08.116Z] Copying: 524/1024 [MB] (43 MBps) [2024-10-08T18:42:09.093Z] Copying: 569/1024 [MB] (44 MBps) [2024-10-08T18:42:10.464Z] Copying: 612/1024 [MB] (43 MBps) [2024-10-08T18:42:11.062Z] Copying: 656/1024 [MB] (44 MBps) [2024-10-08T18:42:12.434Z] Copying: 705/1024 [MB] (48 MBps) [2024-10-08T18:42:13.057Z] Copying: 749/1024 [MB] (43 MBps) [2024-10-08T18:42:14.428Z] Copying: 793/1024 [MB] (44 MBps) [2024-10-08T18:42:15.361Z] Copying: 836/1024 [MB] (43 MBps) [2024-10-08T18:42:16.294Z] Copying: 882/1024 [MB] (46 MBps) [2024-10-08T18:42:17.227Z] Copying: 927/1024 [MB] (44 MBps) [2024-10-08T18:42:18.401Z] Copying: 971/1024 [MB] (44 MBps) [2024-10-08T18:42:19.333Z] Copying: 1015/1024 [MB] (43 MBps) [2024-10-08T18:42:19.333Z] Copying: 1048416/1048576 [kB] (8524 kBps) [2024-10-08T18:42:19.333Z] Copying: 1024/1024 [MB] (average 42 MBps)[2024-10-08 18:42:19.175305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:30.483 [2024-10-08 18:42:19.175373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:30.483 [2024-10-08 18:42:19.175385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:30.483 [2024-10-08 18:42:19.175392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.483 [2024-10-08 18:42:19.176404] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:30.483 [2024-10-08 18:42:19.178634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:30.483 [2024-10-08 18:42:19.178747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:30.483 [2024-10-08 18:42:19.178819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms 00:31:30.483 [2024-10-08 18:42:19.178841] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.483 [2024-10-08 18:42:19.187233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:30.483 [2024-10-08 18:42:19.187350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:30.483 [2024-10-08 18:42:19.187418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.512 ms 00:31:30.483 [2024-10-08 18:42:19.187441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.483 [2024-10-08 18:42:19.187476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:30.483 [2024-10-08 18:42:19.187494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:30.483 [2024-10-08 18:42:19.187554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:30.483 [2024-10-08 18:42:19.187583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.483 [2024-10-08 18:42:19.187636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:30.483 [2024-10-08 18:42:19.187713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:30.483 [2024-10-08 18:42:19.187798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:31:30.483 [2024-10-08 18:42:19.187833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.483 [2024-10-08 18:42:19.187902] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:30.483 [2024-10-08 18:42:19.187924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129792 / 261120 wr_cnt: 1 state: open 00:31:30.483 [2024-10-08 18:42:19.187978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 18:42:19.188041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 
18:42:19.188064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 18:42:19.188087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 18:42:19.188150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 18:42:19.188187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 18:42:19.188210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 18:42:19.188268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 18:42:19.188328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 18:42:19.188352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 18:42:19.188376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:30.483 [2024-10-08 18:42:19.188437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 
18:42:19.188571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.188954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 
[2024-10-08 18:42:19.189021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:31:30.484 [2024-10-08 18:42:19.189117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: 
free 00:31:30.484 [2024-10-08 18:42:19.189206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 
state: free 00:31:30.484 [2024-10-08 18:42:19.189303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 
0 state: free 00:31:30.484 [2024-10-08 18:42:19.189391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:30.484 [2024-10-08 18:42:19.189478] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:30.484 [2024-10-08 
18:42:19.189488] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f9394223-9466-424e-8b5d-3e12e0b5e3f4 00:31:30.484 [2024-10-08 18:42:19.189498] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129792 00:31:30.484 [2024-10-08 18:42:19.189504] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129824 00:31:30.484 [2024-10-08 18:42:19.189515] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129792 00:31:30.484 [2024-10-08 18:42:19.189522] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:31:30.484 [2024-10-08 18:42:19.189527] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:30.484 [2024-10-08 18:42:19.189534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:30.484 [2024-10-08 18:42:19.189542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:30.485 [2024-10-08 18:42:19.189547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:30.485 [2024-10-08 18:42:19.189552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:30.485 [2024-10-08 18:42:19.189558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:30.485 [2024-10-08 18:42:19.189564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:30.485 [2024-10-08 18:42:19.189570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:31:30.485 [2024-10-08 18:42:19.189576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.190990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:30.485 [2024-10-08 18:42:19.191023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:30.485 [2024-10-08 18:42:19.191031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:31:30.485 [2024-10-08 18:42:19.191038] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.191116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:30.485 [2024-10-08 18:42:19.191123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:30.485 [2024-10-08 18:42:19.191131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:31:30.485 [2024-10-08 18:42:19.191137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.195386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.195432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:30.485 [2024-10-08 18:42:19.195443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.195449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.195505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.195513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:30.485 [2024-10-08 18:42:19.195519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.195525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.195568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.195579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:30.485 [2024-10-08 18:42:19.195585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.195594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.195606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 
18:42:19.195612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:30.485 [2024-10-08 18:42:19.195618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.195624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.204346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.204394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:30.485 [2024-10-08 18:42:19.204407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.204413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.211366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.211418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:30.485 [2024-10-08 18:42:19.211428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.211434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.211459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.211466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:30.485 [2024-10-08 18:42:19.211473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.211484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.211523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.211530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:30.485 [2024-10-08 18:42:19.211536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.211542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.211579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.211587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:30.485 [2024-10-08 18:42:19.211593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.211602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.211621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.211629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:30.485 [2024-10-08 18:42:19.211635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.211640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.211668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.211676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:30.485 [2024-10-08 18:42:19.211687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.211693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.211732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:30.485 [2024-10-08 18:42:19.211740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:30.485 [2024-10-08 18:42:19.211747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:30.485 [2024-10-08 18:42:19.211764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:30.485 [2024-10-08 18:42:19.211862] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 38.791 ms, result 0 00:31:31.863 00:31:31.863 00:31:31.863 18:42:20 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:31:31.863 [2024-10-08 18:42:20.704651] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 00:31:31.863 [2024-10-08 18:42:20.704787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95744 ] 00:31:32.121 [2024-10-08 18:42:20.833625] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:31:32.121 [2024-10-08 18:42:20.852250] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:32.121 [2024-10-08 18:42:20.884662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:32.121 [2024-10-08 18:42:20.969343] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:32.121 [2024-10-08 18:42:20.969402] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:32.379 [2024-10-08 18:42:21.116177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.116232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:32.380 [2024-10-08 18:42:21.116250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:32.380 [2024-10-08 18:42:21.116257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.116299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.116308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:32.380 [2024-10-08 18:42:21.116317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:31:32.380 [2024-10-08 18:42:21.116323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.116341] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:32.380 [2024-10-08 18:42:21.116527] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:32.380 [2024-10-08 18:42:21.116538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.116545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:32.380 [2024-10-08 18:42:21.116552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 
00:31:32.380 [2024-10-08 18:42:21.116559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.116805] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:31:32.380 [2024-10-08 18:42:21.116830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.116837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:32.380 [2024-10-08 18:42:21.116844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:32.380 [2024-10-08 18:42:21.116850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.116890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.116899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:32.380 [2024-10-08 18:42:21.116905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:32.380 [2024-10-08 18:42:21.116912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.117101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.117110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:32.380 [2024-10-08 18:42:21.117120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:31:32.380 [2024-10-08 18:42:21.117130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.117189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.117196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:32.380 [2024-10-08 18:42:21.117205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:31:32.380 [2024-10-08 18:42:21.117213] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.117229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.117239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:32.380 [2024-10-08 18:42:21.117245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:32.380 [2024-10-08 18:42:21.117254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.117267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:32.380 [2024-10-08 18:42:21.118633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.118652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:32.380 [2024-10-08 18:42:21.118659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:31:32.380 [2024-10-08 18:42:21.118671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.118701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.118711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:32.380 [2024-10-08 18:42:21.118720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:32.380 [2024-10-08 18:42:21.118725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.118740] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:32.380 [2024-10-08 18:42:21.118765] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:32.380 [2024-10-08 18:42:21.118798] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:32.380 
[2024-10-08 18:42:21.118813] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:32.380 [2024-10-08 18:42:21.118893] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:32.380 [2024-10-08 18:42:21.118904] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:32.380 [2024-10-08 18:42:21.118915] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:32.380 [2024-10-08 18:42:21.118926] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:32.380 [2024-10-08 18:42:21.118933] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:32.380 [2024-10-08 18:42:21.118942] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:32.380 [2024-10-08 18:42:21.118947] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:32.380 [2024-10-08 18:42:21.118958] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:32.380 [2024-10-08 18:42:21.118965] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:32.380 [2024-10-08 18:42:21.118974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.118979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:32.380 [2024-10-08 18:42:21.118985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:31:32.380 [2024-10-08 18:42:21.118993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.119058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.380 [2024-10-08 18:42:21.119067] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:32.380 [2024-10-08 18:42:21.119076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:31:32.380 [2024-10-08 18:42:21.119083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.380 [2024-10-08 18:42:21.119158] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:32.380 [2024-10-08 18:42:21.119165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:32.380 [2024-10-08 18:42:21.119172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:32.380 [2024-10-08 18:42:21.119177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:32.380 [2024-10-08 18:42:21.119188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:32.380 [2024-10-08 18:42:21.119202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:32.380 [2024-10-08 18:42:21.119208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:32.380 [2024-10-08 18:42:21.119223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:32.380 [2024-10-08 18:42:21.119228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:32.380 [2024-10-08 18:42:21.119233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:32.380 [2024-10-08 18:42:21.119239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:32.380 [2024-10-08 18:42:21.119244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:32.380 [2024-10-08 18:42:21.119249] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:32.380 [2024-10-08 18:42:21.119260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:32.380 [2024-10-08 18:42:21.119265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119270] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:32.380 [2024-10-08 18:42:21.119275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:32.380 [2024-10-08 18:42:21.119286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:32.380 [2024-10-08 18:42:21.119295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:32.380 [2024-10-08 18:42:21.119307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:32.380 [2024-10-08 18:42:21.119313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:32.380 [2024-10-08 18:42:21.119325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:32.380 [2024-10-08 18:42:21.119331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:32.380 [2024-10-08 18:42:21.119344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:32.380 [2024-10-08 18:42:21.119350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:32.380 [2024-10-08 
18:42:21.119355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:32.380 [2024-10-08 18:42:21.119361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:32.380 [2024-10-08 18:42:21.119367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:32.380 [2024-10-08 18:42:21.119373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:32.380 [2024-10-08 18:42:21.119379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:32.380 [2024-10-08 18:42:21.119385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:32.380 [2024-10-08 18:42:21.119392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:32.380 [2024-10-08 18:42:21.119405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:32.380 [2024-10-08 18:42:21.119411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:32.380 [2024-10-08 18:42:21.119418] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:32.380 [2024-10-08 18:42:21.119424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:32.381 [2024-10-08 18:42:21.119431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:32.381 [2024-10-08 18:42:21.119437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:32.381 [2024-10-08 18:42:21.119446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:32.381 [2024-10-08 18:42:21.119452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:32.381 [2024-10-08 18:42:21.119458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:32.381 [2024-10-08 18:42:21.119464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region data_btm 00:31:32.381 [2024-10-08 18:42:21.119470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:32.381 [2024-10-08 18:42:21.119477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:32.381 [2024-10-08 18:42:21.119483] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:32.381 [2024-10-08 18:42:21.119491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:32.381 [2024-10-08 18:42:21.119500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:32.381 [2024-10-08 18:42:21.119507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:32.381 [2024-10-08 18:42:21.119513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:32.381 [2024-10-08 18:42:21.119520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:32.381 [2024-10-08 18:42:21.119526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:32.381 [2024-10-08 18:42:21.119532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:32.381 [2024-10-08 18:42:21.119539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:32.381 [2024-10-08 18:42:21.119546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:32.381 [2024-10-08 18:42:21.119552] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:32.381 [2024-10-08 18:42:21.119559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:32.381 [2024-10-08 18:42:21.119566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:32.381 [2024-10-08 18:42:21.119572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:32.381 [2024-10-08 18:42:21.119578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:32.381 [2024-10-08 18:42:21.119585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:32.381 [2024-10-08 18:42:21.119591] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:32.381 [2024-10-08 18:42:21.119598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:32.381 [2024-10-08 18:42:21.119607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:32.381 [2024-10-08 18:42:21.119614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:32.381 [2024-10-08 18:42:21.119620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:32.381 [2024-10-08 18:42:21.119627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:32.381 [2024-10-08 18:42:21.119635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.119642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:32.381 [2024-10-08 18:42:21.119648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:31:32.381 [2024-10-08 18:42:21.119658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.135348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.135421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:32.381 [2024-10-08 18:42:21.135448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.644 ms 00:31:32.381 [2024-10-08 18:42:21.135461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.135627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.135643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:32.381 [2024-10-08 18:42:21.135658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:31:32.381 [2024-10-08 18:42:21.135670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.146434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.146479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:32.381 [2024-10-08 18:42:21.146489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.619 ms 00:31:32.381 [2024-10-08 18:42:21.146499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.146540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 
18:42:21.146547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:32.381 [2024-10-08 18:42:21.146554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:32.381 [2024-10-08 18:42:21.146560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.146640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.146653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:32.381 [2024-10-08 18:42:21.146661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:31:32.381 [2024-10-08 18:42:21.146667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.146794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.146808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:32.381 [2024-10-08 18:42:21.146815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:31:32.381 [2024-10-08 18:42:21.146821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.151035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.151069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:32.381 [2024-10-08 18:42:21.151081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.196 ms 00:31:32.381 [2024-10-08 18:42:21.151090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.151191] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:31:32.381 [2024-10-08 18:42:21.151205] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:32.381 
[2024-10-08 18:42:21.151213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.151219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:32.381 [2024-10-08 18:42:21.151226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:32.381 [2024-10-08 18:42:21.151232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.160644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.160819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:32.381 [2024-10-08 18:42:21.160838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.397 ms 00:31:32.381 [2024-10-08 18:42:21.160855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.160957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.160969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:32.381 [2024-10-08 18:42:21.160982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:31:32.381 [2024-10-08 18:42:21.160988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.161037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.161044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:32.381 [2024-10-08 18:42:21.161054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:31:32.381 [2024-10-08 18:42:21.161060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.161332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.161350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:31:32.381 [2024-10-08 18:42:21.161357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:31:32.381 [2024-10-08 18:42:21.161363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.161376] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:31:32.381 [2024-10-08 18:42:21.161384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.161391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:32.381 [2024-10-08 18:42:21.161400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:32.381 [2024-10-08 18:42:21.161408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.167817] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:32.381 [2024-10-08 18:42:21.168007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.168018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:32.381 [2024-10-08 18:42:21.168026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.584 ms 00:31:32.381 [2024-10-08 18:42:21.168032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.170002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.170026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:32.381 [2024-10-08 18:42:21.170034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.948 ms 00:31:32.381 [2024-10-08 18:42:21.170041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.170096] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: 
*NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:31:32.381 [2024-10-08 18:42:21.170539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.170550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:32.381 [2024-10-08 18:42:21.170557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:31:32.381 [2024-10-08 18:42:21.170563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.170584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.381 [2024-10-08 18:42:21.170591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:32.381 [2024-10-08 18:42:21.170599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:32.381 [2024-10-08 18:42:21.170609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.381 [2024-10-08 18:42:21.170639] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:32.381 [2024-10-08 18:42:21.170646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.382 [2024-10-08 18:42:21.170651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:32.382 [2024-10-08 18:42:21.170657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:32.382 [2024-10-08 18:42:21.170662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.382 [2024-10-08 18:42:21.173765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.382 [2024-10-08 18:42:21.173798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:32.382 [2024-10-08 18:42:21.173807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.087 ms 00:31:32.382 [2024-10-08 18:42:21.173814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:31:32.382 [2024-10-08 18:42:21.173872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:32.382 [2024-10-08 18:42:21.173880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:32.382 [2024-10-08 18:42:21.173887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:31:32.382 [2024-10-08 18:42:21.173893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:32.382 [2024-10-08 18:42:21.174641] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 58.134 ms, result 0 00:31:33.754  [2024-10-08T18:42:23.536Z] Copying: 47/1024 [MB] (47 MBps) [2024-10-08T18:42:24.468Z] Copying: 95/1024 [MB] (48 MBps) [2024-10-08T18:42:25.403Z] Copying: 142/1024 [MB] (46 MBps) [2024-10-08T18:42:26.337Z] Copying: 190/1024 [MB] (48 MBps) [2024-10-08T18:42:27.710Z] Copying: 240/1024 [MB] (49 MBps) [2024-10-08T18:42:28.666Z] Copying: 288/1024 [MB] (48 MBps) [2024-10-08T18:42:29.598Z] Copying: 336/1024 [MB] (48 MBps) [2024-10-08T18:42:30.541Z] Copying: 382/1024 [MB] (45 MBps) [2024-10-08T18:42:31.474Z] Copying: 428/1024 [MB] (45 MBps) [2024-10-08T18:42:32.429Z] Copying: 472/1024 [MB] (44 MBps) [2024-10-08T18:42:33.363Z] Copying: 518/1024 [MB] (45 MBps) [2024-10-08T18:42:34.749Z] Copying: 566/1024 [MB] (48 MBps) [2024-10-08T18:42:35.682Z] Copying: 614/1024 [MB] (47 MBps) [2024-10-08T18:42:36.614Z] Copying: 658/1024 [MB] (44 MBps) [2024-10-08T18:42:37.546Z] Copying: 703/1024 [MB] (44 MBps) [2024-10-08T18:42:38.480Z] Copying: 750/1024 [MB] (47 MBps) [2024-10-08T18:42:39.410Z] Copying: 795/1024 [MB] (44 MBps) [2024-10-08T18:42:40.343Z] Copying: 839/1024 [MB] (44 MBps) [2024-10-08T18:42:41.716Z] Copying: 884/1024 [MB] (45 MBps) [2024-10-08T18:42:42.649Z] Copying: 928/1024 [MB] (43 MBps) [2024-10-08T18:42:43.581Z] Copying: 973/1024 [MB] (45 MBps) [2024-10-08T18:42:43.840Z] Copying: 1010/1024 [MB] (36 MBps) [2024-10-08T18:42:43.840Z] 
Copying: 1024/1024 [MB] (average 45 MBps)[2024-10-08 18:42:43.690122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.990 [2024-10-08 18:42:43.690188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:54.990 [2024-10-08 18:42:43.690204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:54.990 [2024-10-08 18:42:43.690213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.990 [2024-10-08 18:42:43.690237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:54.990 [2024-10-08 18:42:43.690900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.990 [2024-10-08 18:42:43.690920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:54.990 [2024-10-08 18:42:43.690939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:31:54.990 [2024-10-08 18:42:43.690947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.990 [2024-10-08 18:42:43.691196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.990 [2024-10-08 18:42:43.691216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:54.990 [2024-10-08 18:42:43.691231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:31:54.990 [2024-10-08 18:42:43.691240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.990 [2024-10-08 18:42:43.691270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.990 [2024-10-08 18:42:43.691372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:31:54.990 [2024-10-08 18:42:43.691382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:54.990 [2024-10-08 18:42:43.691392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:31:54.990 [2024-10-08 18:42:43.691450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.990 [2024-10-08 18:42:43.691460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:31:54.990 [2024-10-08 18:42:43.691472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:31:54.990 [2024-10-08 18:42:43.691481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.990 [2024-10-08 18:42:43.691495] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:54.990 [2024-10-08 18:42:43.691512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:31:54.990 [2024-10-08 18:42:43.691523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 
00:31:54.990 [2024-10-08 18:42:43.691602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: 
free 00:31:54.990 [2024-10-08 18:42:43.691808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 
state: free 00:31:54.990 [2024-10-08 18:42:43.691939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.691999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.692008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.692016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:54.990 [2024-10-08 18:42:43.692025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 
0 state: free 00:31:54.991 [2024-10-08 18:42:43.692060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 
wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 
261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 
/ 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:54.991 [2024-10-08 18:42:43.692484] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:54.991 [2024-10-08 18:42:43.692496] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f9394223-9466-424e-8b5d-3e12e0b5e3f4 00:31:54.991 [2024-10-08 18:42:43.692504] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:31:54.991 [2024-10-08 18:42:43.692512] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1312 00:31:54.991 [2024-10-08 18:42:43.692520] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1280 00:31:54.991 [2024-10-08 18:42:43.692528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0250 00:31:54.991 [2024-10-08 18:42:43.692536] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:54.991 [2024-10-08 18:42:43.692548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:54.991 [2024-10-08 18:42:43.692557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:54.991 [2024-10-08 18:42:43.692564] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:54.991 [2024-10-08 18:42:43.692572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:54.991 [2024-10-08 18:42:43.692580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.991 [2024-10-08 18:42:43.692588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:54.991 [2024-10-08 18:42:43.692597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:31:54.991 [2024-10-08 18:42:43.692605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.991 [2024-10-08 18:42:43.694802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.991 [2024-10-08 18:42:43.694900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:54.991 [2024-10-08 18:42:43.694957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.180 ms 00:31:54.991 [2024-10-08 18:42:43.695026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.991 [2024-10-08 18:42:43.695128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:54.991 [2024-10-08 18:42:43.695163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:54.991 [2024-10-08 18:42:43.695241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:31:54.991 [2024-10-08 18:42:43.695268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.991 [2024-10-08 18:42:43.699768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.991 [2024-10-08 18:42:43.699873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:54.991 [2024-10-08 18:42:43.699916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.991 [2024-10-08 18:42:43.699934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.991 
[2024-10-08 18:42:43.699998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.991 [2024-10-08 18:42:43.700030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:54.991 [2024-10-08 18:42:43.700054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.991 [2024-10-08 18:42:43.700068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.991 [2024-10-08 18:42:43.700121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.991 [2024-10-08 18:42:43.700145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:54.991 [2024-10-08 18:42:43.700215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.992 [2024-10-08 18:42:43.700232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.992 [2024-10-08 18:42:43.700254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.992 [2024-10-08 18:42:43.700270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:54.992 [2024-10-08 18:42:43.700284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.992 [2024-10-08 18:42:43.700300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.992 [2024-10-08 18:42:43.708441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.992 [2024-10-08 18:42:43.708606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:54.992 [2024-10-08 18:42:43.708654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.992 [2024-10-08 18:42:43.708672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.992 [2024-10-08 18:42:43.716084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.992 [2024-10-08 18:42:43.716265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:31:54.992 [2024-10-08 18:42:43.716318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.992 [2024-10-08 18:42:43.716336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.992 [2024-10-08 18:42:43.716374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.992 [2024-10-08 18:42:43.716420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:54.992 [2024-10-08 18:42:43.716439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.992 [2024-10-08 18:42:43.716461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.992 [2024-10-08 18:42:43.716530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.992 [2024-10-08 18:42:43.716550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:54.992 [2024-10-08 18:42:43.716639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.992 [2024-10-08 18:42:43.716656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.992 [2024-10-08 18:42:43.716712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.992 [2024-10-08 18:42:43.716776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:54.992 [2024-10-08 18:42:43.716797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.992 [2024-10-08 18:42:43.716812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.992 [2024-10-08 18:42:43.716868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.992 [2024-10-08 18:42:43.716989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:54.992 [2024-10-08 18:42:43.717047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.992 [2024-10-08 
18:42:43.717065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.992 [2024-10-08 18:42:43.717129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.992 [2024-10-08 18:42:43.717148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:54.992 [2024-10-08 18:42:43.717272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.992 [2024-10-08 18:42:43.717305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.992 [2024-10-08 18:42:43.717359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:54.992 [2024-10-08 18:42:43.717378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:54.992 [2024-10-08 18:42:43.717393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:54.992 [2024-10-08 18:42:43.717435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:54.992 [2024-10-08 18:42:43.717547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 27.408 ms, result 0 00:31:55.251 00:31:55.251 00:31:55.251 18:42:43 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:57.792 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:57.792 Process with pid 94743 is not found 00:31:57.792 
Remove shared memory files 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 94743 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 94743 ']' 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 94743 00:31:57.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (94743) - No such process 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 94743 is not found' 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_band_md /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_l2p_l1 /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_l2p_l2 /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_l2p_l2_ctx /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_nvc_md /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_p2l_pool /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_sb /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_sb_shm /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_trim_bitmap /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_trim_log /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_trim_md /dev/hugepages/ftl_f9394223-9466-424e-8b5d-3e12e0b5e3f4_vmap 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:31:57.792 ************************************ 00:31:57.792 END TEST 
ftl_restore_fast 00:31:57.792 ************************************ 00:31:57.792 00:31:57.792 real 2m7.877s 00:31:57.792 user 1m58.199s 00:31:57.792 sys 0m11.374s 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:57.792 18:42:46 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:57.792 Process with pid 85202 is not found 00:31:57.792 18:42:46 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:31:57.792 18:42:46 ftl -- ftl/ftl.sh@14 -- # killprocess 85202 00:31:57.792 18:42:46 ftl -- common/autotest_common.sh@950 -- # '[' -z 85202 ']' 00:31:57.792 18:42:46 ftl -- common/autotest_common.sh@954 -- # kill -0 85202 00:31:57.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85202) - No such process 00:31:57.792 18:42:46 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 85202 is not found' 00:31:57.792 18:42:46 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:31:57.792 18:42:46 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=96030 00:31:57.792 18:42:46 ftl -- ftl/ftl.sh@20 -- # waitforlisten 96030 00:31:57.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.792 18:42:46 ftl -- common/autotest_common.sh@831 -- # '[' -z 96030 ']' 00:31:57.792 18:42:46 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.792 18:42:46 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:57.792 18:42:46 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:57.792 18:42:46 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:57.792 18:42:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:57.792 18:42:46 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:57.792 [2024-10-08 18:42:46.288453] Starting SPDK v25.01-pre git sha1 92108e0a2 / DPDK 24.11.0-rc0 initialization... 
00:31:57.792 [2024-10-08 18:42:46.288582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96030 ] 00:31:57.792 [2024-10-08 18:42:46.417918] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:57.792 [2024-10-08 18:42:46.440070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:57.792 [2024-10-08 18:42:46.476005] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:31:58.361 18:42:47 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:58.361 18:42:47 ftl -- common/autotest_common.sh@864 -- # return 0 00:31:58.361 18:42:47 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:58.620 nvme0n1 00:31:58.620 18:42:47 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:31:58.620 18:42:47 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:58.620 18:42:47 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:58.880 18:42:47 ftl -- ftl/common.sh@28 -- # stores=f868b44e-d9b5-4f9e-afc4-b3659976d530 00:31:58.880 18:42:47 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:31:58.880 18:42:47 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f868b44e-d9b5-4f9e-afc4-b3659976d530 00:31:59.140 18:42:47 ftl -- ftl/ftl.sh@23 -- # killprocess 96030 00:31:59.140 18:42:47 ftl -- common/autotest_common.sh@950 -- # '[' -z 96030 ']' 00:31:59.140 18:42:47 ftl -- common/autotest_common.sh@954 -- # kill -0 96030 00:31:59.140 18:42:47 ftl -- common/autotest_common.sh@955 -- # uname 00:31:59.140 18:42:47 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
00:31:59.140 18:42:47 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96030 00:31:59.140 killing process with pid 96030 00:31:59.140 18:42:47 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:59.140 18:42:47 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:59.140 18:42:47 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96030' 00:31:59.140 18:42:47 ftl -- common/autotest_common.sh@969 -- # kill 96030 00:31:59.140 18:42:47 ftl -- common/autotest_common.sh@974 -- # wait 96030 00:31:59.400 18:42:48 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:59.658 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:59.658 Waiting for block devices as requested 00:31:59.947 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:31:59.947 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:31:59.947 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:31:59.947 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:05.214 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:05.214 Remove shared memory files 00:32:05.214 18:42:53 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:05.214 18:42:53 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:05.214 18:42:53 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:05.214 18:42:53 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:05.214 18:42:53 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:05.214 18:42:53 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:05.214 18:42:53 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:05.214 ************************************ 00:32:05.214 END TEST ftl 00:32:05.214 ************************************ 00:32:05.214 00:32:05.214 real 16m32.331s 00:32:05.214 user 19m15.663s 00:32:05.214 sys 1m32.512s 00:32:05.214 18:42:53 ftl -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:32:05.214 18:42:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:05.214 18:42:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:05.214 18:42:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:05.214 18:42:53 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:05.214 18:42:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:05.214 18:42:53 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:32:05.214 18:42:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:05.214 18:42:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:05.214 18:42:53 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:32:05.214 18:42:53 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:32:05.214 18:42:53 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:32:05.214 18:42:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:05.214 18:42:53 -- common/autotest_common.sh@10 -- # set +x 00:32:05.214 18:42:53 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:32:05.214 18:42:53 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:05.214 18:42:53 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:05.214 18:42:53 -- common/autotest_common.sh@10 -- # set +x 00:32:06.148 INFO: APP EXITING 00:32:06.148 INFO: killing all VMs 00:32:06.148 INFO: killing vhost app 00:32:06.148 INFO: EXIT DONE 00:32:06.404 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:06.746 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:06.746 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:06.746 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:06.746 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:07.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:07.265 Cleaning 00:32:07.265 Removing: /var/run/dpdk/spdk0/config 00:32:07.265 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:07.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:07.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:07.265 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:07.265 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:07.265 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:07.525 Removing: /var/run/dpdk/spdk0 00:32:07.525 Removing: /var/run/dpdk/spdk_pid70632 00:32:07.525 Removing: /var/run/dpdk/spdk_pid70790 00:32:07.525 Removing: /var/run/dpdk/spdk_pid70991 00:32:07.525 Removing: /var/run/dpdk/spdk_pid71073 00:32:07.525 Removing: /var/run/dpdk/spdk_pid71096 00:32:07.525 Removing: /var/run/dpdk/spdk_pid71208 00:32:07.526 Removing: /var/run/dpdk/spdk_pid71226 00:32:07.526 Removing: /var/run/dpdk/spdk_pid71403 00:32:07.526 Removing: /var/run/dpdk/spdk_pid71476 00:32:07.526 Removing: /var/run/dpdk/spdk_pid71561 00:32:07.526 Removing: /var/run/dpdk/spdk_pid71656 00:32:07.526 Removing: /var/run/dpdk/spdk_pid71736 00:32:07.526 Removing: /var/run/dpdk/spdk_pid71770 00:32:07.526 Removing: /var/run/dpdk/spdk_pid71807 00:32:07.526 Removing: /var/run/dpdk/spdk_pid71877 00:32:07.526 Removing: /var/run/dpdk/spdk_pid71983 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72403 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72450 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72497 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72513 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72571 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72587 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72645 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72661 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72703 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72721 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72763 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72781 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72908 00:32:07.526 Removing: /var/run/dpdk/spdk_pid72939 00:32:07.526 Removing: /var/run/dpdk/spdk_pid73028 
00:32:07.526 Removing: /var/run/dpdk/spdk_pid73189 00:32:07.526 Removing: /var/run/dpdk/spdk_pid73258 00:32:07.526 Removing: /var/run/dpdk/spdk_pid73282 00:32:07.526 Removing: /var/run/dpdk/spdk_pid73692 00:32:07.526 Removing: /var/run/dpdk/spdk_pid73792 00:32:07.526 Removing: /var/run/dpdk/spdk_pid73892 00:32:07.526 Removing: /var/run/dpdk/spdk_pid73934 00:32:07.526 Removing: /var/run/dpdk/spdk_pid73956 00:32:07.526 Removing: /var/run/dpdk/spdk_pid74040 00:32:07.526 Removing: /var/run/dpdk/spdk_pid74642 00:32:07.526 Removing: /var/run/dpdk/spdk_pid74668 00:32:07.526 Removing: /var/run/dpdk/spdk_pid75126 00:32:07.526 Removing: /var/run/dpdk/spdk_pid75213 00:32:07.526 Removing: /var/run/dpdk/spdk_pid75326 00:32:07.526 Removing: /var/run/dpdk/spdk_pid75364 00:32:07.526 Removing: /var/run/dpdk/spdk_pid75389 00:32:07.526 Removing: /var/run/dpdk/spdk_pid75409 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77236 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77351 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77365 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77378 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77419 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77423 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77435 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77480 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77484 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77496 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77535 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77539 00:32:07.526 Removing: /var/run/dpdk/spdk_pid77551 00:32:07.526 Removing: /var/run/dpdk/spdk_pid78925 00:32:07.526 Removing: /var/run/dpdk/spdk_pid79011 00:32:07.526 Removing: /var/run/dpdk/spdk_pid80405 00:32:07.526 Removing: /var/run/dpdk/spdk_pid81788 00:32:07.526 Removing: /var/run/dpdk/spdk_pid81853 00:32:07.526 Removing: /var/run/dpdk/spdk_pid81907 00:32:07.526 Removing: /var/run/dpdk/spdk_pid81961 00:32:07.526 Removing: /var/run/dpdk/spdk_pid82038 00:32:07.526 Removing: /var/run/dpdk/spdk_pid82107 
00:32:07.526 Removing: /var/run/dpdk/spdk_pid82249 00:32:07.526 Removing: /var/run/dpdk/spdk_pid82596 00:32:07.526 Removing: /var/run/dpdk/spdk_pid82616 00:32:07.526 Removing: /var/run/dpdk/spdk_pid83066 00:32:07.526 Removing: /var/run/dpdk/spdk_pid83243 00:32:07.526 Removing: /var/run/dpdk/spdk_pid83333 00:32:07.526 Removing: /var/run/dpdk/spdk_pid83440 00:32:07.526 Removing: /var/run/dpdk/spdk_pid83476 00:32:07.526 Removing: /var/run/dpdk/spdk_pid83502 00:32:07.526 Removing: /var/run/dpdk/spdk_pid83801 00:32:07.526 Removing: /var/run/dpdk/spdk_pid83839 00:32:07.526 Removing: /var/run/dpdk/spdk_pid83895 00:32:07.526 Removing: /var/run/dpdk/spdk_pid84266 00:32:07.526 Removing: /var/run/dpdk/spdk_pid84406 00:32:07.526 Removing: /var/run/dpdk/spdk_pid85202 00:32:07.526 Removing: /var/run/dpdk/spdk_pid85323 00:32:07.526 Removing: /var/run/dpdk/spdk_pid85471 00:32:07.526 Removing: /var/run/dpdk/spdk_pid85557 00:32:07.526 Removing: /var/run/dpdk/spdk_pid85865 00:32:07.526 Removing: /var/run/dpdk/spdk_pid86114 00:32:07.526 Removing: /var/run/dpdk/spdk_pid86464 00:32:07.526 Removing: /var/run/dpdk/spdk_pid86637 00:32:07.526 Removing: /var/run/dpdk/spdk_pid86816 00:32:07.526 Removing: /var/run/dpdk/spdk_pid86859 00:32:07.526 Removing: /var/run/dpdk/spdk_pid87027 00:32:07.526 Removing: /var/run/dpdk/spdk_pid87041 00:32:07.526 Removing: /var/run/dpdk/spdk_pid87077 00:32:07.526 Removing: /var/run/dpdk/spdk_pid87236 00:32:07.526 Removing: /var/run/dpdk/spdk_pid87438 00:32:07.526 Removing: /var/run/dpdk/spdk_pid87862 00:32:07.526 Removing: /var/run/dpdk/spdk_pid88612 00:32:07.526 Removing: /var/run/dpdk/spdk_pid89145 00:32:07.526 Removing: /var/run/dpdk/spdk_pid89813 00:32:07.526 Removing: /var/run/dpdk/spdk_pid89951 00:32:07.526 Removing: /var/run/dpdk/spdk_pid90032 00:32:07.526 Removing: /var/run/dpdk/spdk_pid91066 00:32:07.526 Removing: /var/run/dpdk/spdk_pid91124 00:32:07.526 Removing: /var/run/dpdk/spdk_pid92132 00:32:07.526 Removing: /var/run/dpdk/spdk_pid92800 
00:32:07.526 Removing: /var/run/dpdk/spdk_pid93787 00:32:07.526 Removing: /var/run/dpdk/spdk_pid93898 00:32:07.526 Removing: /var/run/dpdk/spdk_pid93934 00:32:07.526 Removing: /var/run/dpdk/spdk_pid93989 00:32:07.526 Removing: /var/run/dpdk/spdk_pid94040 00:32:07.526 Removing: /var/run/dpdk/spdk_pid94094 00:32:07.526 Removing: /var/run/dpdk/spdk_pid94272 00:32:07.526 Removing: /var/run/dpdk/spdk_pid94322 00:32:07.526 Removing: /var/run/dpdk/spdk_pid94380 00:32:07.786 Removing: /var/run/dpdk/spdk_pid94475 00:32:07.786 Removing: /var/run/dpdk/spdk_pid94495 00:32:07.786 Removing: /var/run/dpdk/spdk_pid94569 00:32:07.786 Removing: /var/run/dpdk/spdk_pid94743 00:32:07.786 Removing: /var/run/dpdk/spdk_pid94929 00:32:07.786 Removing: /var/run/dpdk/spdk_pid95226 00:32:07.786 Removing: /var/run/dpdk/spdk_pid95481 00:32:07.786 Removing: /var/run/dpdk/spdk_pid95744 00:32:07.786 Removing: /var/run/dpdk/spdk_pid96030 00:32:07.786 Clean 00:32:07.786 18:42:56 -- common/autotest_common.sh@1451 -- # return 0 00:32:07.786 18:42:56 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:32:07.786 18:42:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:07.786 18:42:56 -- common/autotest_common.sh@10 -- # set +x 00:32:07.786 18:42:56 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:32:07.786 18:42:56 -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:07.786 18:42:56 -- common/autotest_common.sh@10 -- # set +x 00:32:07.786 18:42:56 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:07.786 18:42:56 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:07.786 18:42:56 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:07.786 18:42:56 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:32:07.786 18:42:56 -- spdk/autotest.sh@394 -- # hostname 00:32:07.786 18:42:56 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
--rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:07.786 geninfo: WARNING: invalid characters removed from testname! 00:32:34.359 18:43:19 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:34.359 18:43:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:36.255 18:43:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:38.148 18:43:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 
00:32:40.672 18:43:29 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:42.645 18:43:31 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:44.544 18:43:33 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:44.544 18:43:33 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:32:44.545 18:43:33 -- common/autotest_common.sh@1681 -- $ lcov --version
00:32:44.545 18:43:33 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:32:44.545 18:43:33 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:32:44.545 18:43:33 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:32:44.545 18:43:33 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:32:44.545 18:43:33 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:32:44.545 18:43:33 -- scripts/common.sh@336 -- $ IFS=.-:
00:32:44.545 18:43:33 -- scripts/common.sh@336 -- $ read -ra ver1
00:32:44.545 18:43:33 -- scripts/common.sh@337 -- $ IFS=.-:
00:32:44.545 18:43:33 -- scripts/common.sh@337 -- $ read -ra ver2
00:32:44.545 18:43:33 -- scripts/common.sh@338 -- $ local 'op=<'
00:32:44.545 18:43:33 -- scripts/common.sh@340 -- $ ver1_l=2
00:32:44.545 18:43:33 -- scripts/common.sh@341 -- $ ver2_l=1
00:32:44.545 18:43:33 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:32:44.545 18:43:33 -- scripts/common.sh@344 -- $ case "$op" in
00:32:44.545 18:43:33 -- scripts/common.sh@345 -- $ : 1
00:32:44.545 18:43:33 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:32:44.545 18:43:33 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:44.545 18:43:33 -- scripts/common.sh@365 -- $ decimal 1
00:32:44.545 18:43:33 -- scripts/common.sh@353 -- $ local d=1
00:32:44.545 18:43:33 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:32:44.545 18:43:33 -- scripts/common.sh@355 -- $ echo 1
00:32:44.545 18:43:33 -- scripts/common.sh@365 -- $ ver1[v]=1
00:32:44.545 18:43:33 -- scripts/common.sh@366 -- $ decimal 2
00:32:44.545 18:43:33 -- scripts/common.sh@353 -- $ local d=2
00:32:44.545 18:43:33 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:32:44.545 18:43:33 -- scripts/common.sh@355 -- $ echo 2
00:32:44.545 18:43:33 -- scripts/common.sh@366 -- $ ver2[v]=2
00:32:44.545 18:43:33 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:32:44.545 18:43:33 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:32:44.545 18:43:33 -- scripts/common.sh@368 -- $ return 0
00:32:44.545 18:43:33 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:44.545 18:43:33 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:32:44.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:44.545 --rc genhtml_branch_coverage=1
00:32:44.545 --rc genhtml_function_coverage=1
00:32:44.545 --rc genhtml_legend=1
00:32:44.545 --rc geninfo_all_blocks=1
00:32:44.545 --rc geninfo_unexecuted_blocks=1
00:32:44.545
00:32:44.545 '
00:32:44.545 18:43:33 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:32:44.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:44.545 --rc genhtml_branch_coverage=1
00:32:44.545 --rc genhtml_function_coverage=1
00:32:44.545 --rc genhtml_legend=1
00:32:44.545 --rc geninfo_all_blocks=1
00:32:44.545 --rc geninfo_unexecuted_blocks=1
00:32:44.545
00:32:44.545 '
00:32:44.545 18:43:33 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:32:44.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:44.545 --rc genhtml_branch_coverage=1
00:32:44.545 --rc genhtml_function_coverage=1
00:32:44.545 --rc genhtml_legend=1
00:32:44.545 --rc geninfo_all_blocks=1
00:32:44.545 --rc geninfo_unexecuted_blocks=1
00:32:44.545
00:32:44.545 '
00:32:44.545 18:43:33 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:32:44.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:44.545 --rc genhtml_branch_coverage=1
00:32:44.545 --rc genhtml_function_coverage=1
00:32:44.545 --rc genhtml_legend=1
00:32:44.545 --rc geninfo_all_blocks=1
00:32:44.545 --rc geninfo_unexecuted_blocks=1
00:32:44.545
00:32:44.545 '
00:32:44.545 18:43:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:32:44.545 18:43:33 -- scripts/common.sh@15 -- $ shopt -s extglob
00:32:44.545 18:43:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:32:44.545 18:43:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:44.545 18:43:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:44.545 18:43:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:44.545 18:43:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:44.545 18:43:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:44.545 18:43:33 -- paths/export.sh@5 -- $ export PATH
00:32:44.545 18:43:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:44.545 18:43:33 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:32:44.545 18:43:33 -- common/autobuild_common.sh@486 -- $ date +%s
00:32:44.545 18:43:33 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728413013.XXXXXX
00:32:44.545 18:43:33 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728413013.yDPXs0
00:32:44.545 18:43:33 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:32:44.545 18:43:33 -- common/autobuild_common.sh@492 -- $ '[' -n main ']'
00:32:44.545 18:43:33 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:32:44.545 18:43:33 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:32:44.545 18:43:33 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:32:44.545 18:43:33 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:32:44.545 18:43:33 -- common/autobuild_common.sh@502 -- $ get_config_params
00:32:44.545 18:43:33 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:32:44.545 18:43:33 -- common/autotest_common.sh@10 -- $ set +x
00:32:44.545 18:43:33 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:32:44.545 18:43:33 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:32:44.545 18:43:33 -- pm/common@17 -- $ local monitor
00:32:44.545 18:43:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:44.545 18:43:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:44.545 18:43:33 -- pm/common@25 -- $ sleep 1
00:32:44.545 18:43:33 -- pm/common@21 -- $ date +%s
00:32:44.545 18:43:33 -- pm/common@21 -- $ date +%s
00:32:44.545 18:43:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728413013
00:32:44.545 18:43:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728413013
00:32:44.545 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728413013_collect-cpu-load.pm.log
00:32:44.545 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728413013_collect-vmstat.pm.log
00:32:45.480 18:43:34 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:32:45.480 18:43:34 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:32:45.480 18:43:34 -- spdk/autopackage.sh@14 -- $ timing_finish
00:32:45.480 18:43:34 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:45.480 18:43:34 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:45.480 18:43:34 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:45.738 18:43:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:32:45.738 18:43:34 -- pm/common@29 -- $ signal_monitor_resources TERM
00:32:45.738 18:43:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:32:45.738 18:43:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:45.738 18:43:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:32:45.738 18:43:34 -- pm/common@44 -- $ pid=97731
00:32:45.738 18:43:34 -- pm/common@50 -- $ kill -TERM 97731
00:32:45.738 18:43:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:32:45.738 18:43:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:32:45.738 18:43:34 -- pm/common@44 -- $ pid=97732
00:32:45.738 18:43:34 -- pm/common@50 -- $ kill -TERM 97732
00:32:45.738 + [[ -n 5771 ]]
00:32:45.738 + sudo kill 5771
00:32:45.746 [Pipeline] }
00:32:45.762 [Pipeline] // timeout
00:32:45.768 [Pipeline] }
00:32:45.784 [Pipeline] // stage
00:32:45.791 [Pipeline] }
00:32:45.805 [Pipeline] // catchError
00:32:45.815 [Pipeline] stage
00:32:45.817 [Pipeline] { (Stop VM)
00:32:45.831 [Pipeline] sh
00:32:46.109 + vagrant halt
00:32:48.641 ==> default: Halting domain...
00:32:53.918 [Pipeline] sh
00:32:54.196 + vagrant destroy -f
00:32:56.719 ==> default: Removing domain...
00:32:56.985 [Pipeline] sh
00:32:57.258 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:32:57.265 [Pipeline] }
00:32:57.280 [Pipeline] // stage
00:32:57.286 [Pipeline] }
00:32:57.301 [Pipeline] // dir
00:32:57.306 [Pipeline] }
00:32:57.321 [Pipeline] // wrap
00:32:57.327 [Pipeline] }
00:32:57.340 [Pipeline] // catchError
00:32:57.350 [Pipeline] stage
00:32:57.352 [Pipeline] { (Epilogue)
00:32:57.364 [Pipeline] sh
00:32:57.640 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:04.238 [Pipeline] catchError
00:33:04.240 [Pipeline] {
00:33:04.252 [Pipeline] sh
00:33:04.531 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:04.531 Artifacts sizes are good
00:33:04.539 [Pipeline] }
00:33:04.556 [Pipeline] // catchError
00:33:04.570 [Pipeline] archiveArtifacts
00:33:04.618 Archiving artifacts
00:33:04.720 [Pipeline] cleanWs
00:33:04.731 [WS-CLEANUP] Deleting project workspace...
00:33:04.731 [WS-CLEANUP] Deferred wipeout is used...
00:33:04.737 [WS-CLEANUP] done
00:33:04.739 [Pipeline] }
00:33:04.755 [Pipeline] // stage
00:33:04.760 [Pipeline] }
00:33:04.774 [Pipeline] // node
00:33:04.780 [Pipeline] End of Pipeline
00:33:04.810 Finished: SUCCESS